[PATCH 3/11] powerpc: Seperate usage of KERNELBASE and PAGE_OFFSET
Mike Kravetz
kravetz at us.ibm.com
Tue Dec 6 06:25:52 EST 2005
On Sun, Dec 04, 2005 at 06:39:20PM +0000, Michael Ellerman wrote:
> Index: kexec/arch/powerpc/mm/hash_utils_64.c
> ===================================================================
> --- kexec.orig/arch/powerpc/mm/hash_utils_64.c
> +++ kexec/arch/powerpc/mm/hash_utils_64.c
> @@ -456,7 +456,7 @@ void __init htab_initialize(void)
>
> /* create bolted the linear mapping in the hash table */
> for (i=0; i < lmb.memory.cnt; i++) {
> - base = lmb.memory.region[i].base + KERNELBASE;
> + base = (unsigned long)__va(lmb.memory.region[i].base);
> size = lmb.memory.region[i].size;
I think you'll want to make a similar change to add_memory() in
powerpc/mm/mem.c. That routine was modelled on htab_initialize()'s
call to htab_bolt_mapping():
int __devinit add_memory(u64 start, u64 size)
{
	struct pglist_data *pgdata = NODE_DATA(0);
	struct zone *zone;
	unsigned long start_pfn = start >> PAGE_SHIFT;
	unsigned long nr_pages = size >> PAGE_SHIFT;

	start += KERNELBASE;
	create_section_mapping(start, start + size);

	/* this should work for most non-highmem platforms */
	zone = pgdata->node_zones;

	return __add_pages(zone, start_pfn, nr_pages);

	return 0;
}
--
Mike