Maximum ioremap size for ppc arch?

michael.firth at bt.com
Wed Dec 5 20:50:16 EST 2007


> -----Original Message-----
> From: Matt Porter [mailto:mporter at kernel.crashing.org] 
> Sent: 03 December 2007 15:30
> To: Firth,MJC,Michael,DMM R
> Cc: linuxppc-embedded at ozlabs.org
> Subject: Re: Maximum ioremap size for ppc arch?
> 
> On Mon, Dec 03, 2007 at 09:22:06AM -0000, michael.firth at bt.com wrote:
> > I'm trying to get an MPC834x system running that has 256MBytes of
> > NOR flash connected.
> > 
> > The physmap flash driver is failing to ioremap() that amount of
> > space, while on a similar system with 128Mbytes of flash, there are
> > no problems.
> > 
> > Is this a known limitation of ioremap() on the ppc architecture, or
> > specifically the MPC834x family, and is there any (hopefully easy)
> > way to increase this limit?
> 
> The answer is "it depends". It depends on the amount of 
> system memory you have. By default, your system memory is 
> mapped at 0xc0000000, leaving not enough space for vmalloc 
> allocations to grab 256MB for the ioremap (and avoid the 
> fixed virtual mapping in the high virtual address area).
> 
> See the "Advanced setup" menu. Normally, you can set "Set 
> custom kernel base address" to 0xa0000000 safely. That will 
> give you an additional 256MB of vmalloc space. On 
> arch/powerpc, you'll also have to set "Size of user task 
> space" to 0x80000000 or 0xa0000000.
> 
> -Matt
> 
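For anyone following along, the settings Matt describes correspond roughly to the following Kconfig symbols under "Advanced setup" (a sketch only; the exact symbol names vary between arch/ppc, arch/powerpc and kernel versions, so check your own tree):

```
CONFIG_ADVANCED_OPTIONS=y
# "Set custom kernel base address"
CONFIG_KERNEL_START_BOOL=y
CONFIG_KERNEL_START=0xa0000000
# "Size of user task space" (arch/powerpc)
CONFIG_TASK_SIZE_BOOL=y
CONFIG_TASK_SIZE=0x80000000
```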

I've solved my problem for now, though I'm not convinced the solution
will scale.

I tried moving the kernel base address to 0x80000000, but the system
became very unstable - in particular, though it detected the flash
partitions, as soon as I tried to write to them the system spontaneously
rebooted - not even a kernel panic, just a straight reboot.

As I'm using arch/ppc, the default user task space appears to be
0x80000000, which shouldn't have conflicted with this.

It turned out that the bottom of the vmalloc space is defined as 'start
of kernel + amount of physical RAM', which in the case of our board
becomes '0xc0000000 + 0x10000000', as there is 256MB of RAM present. The
top of vmalloc space was being limited by the CPU register block mapped
at IMMRBAR. This was configured to 0xe0000000, which left only 256MB of
vmalloc space.

I've got things working by moving IMMRBAR up to 0xeff00000, which gives
nearly 256MB more vmalloc space.

My main queries are:
1) Why did changing the kernel base address to 0x80000000 make the
system unstable? Would 0xa0000000 as suggested not have caused this
problem?
2) Currently IMMRBAR has the same physical and virtual address. Does
this need to be the case? If this is a restriction, it seems to mean
that the top 256MB of the virtual address space becomes unusable.
3) Why is the kernel designed to run at 0xc0000000? This seems to leave
only 1GB of addressing space for all the physically addressable memory
(RAM + ioremapped + registers), while reserving 3GB of space for user
processes. The 3GB is presumably mostly unusable on a system without a
large amount of swap, as the 1GB limit on memory will prevent much more
than that being available for user space.

Thanks for the assistance so far; the pointer to the definitions of
VMALLOC_START and VMALLOC_END gave me the hook I needed to work out
where the limitation was coming from. I would also suggest that it's
worth changing the error message that's generated when the vmalloc
space is exhausted:

"allocation failed: out of vmalloc space - use vmalloc=<size> to
increase size."


