remapping 4MB of kernel space with remap_pfn_range() and nopage()

john.p.price at
Mon Aug 31 23:28:07 EST 2009

I've been using the Linux mmap system call to map 4MB of contiguous
kernel memory (RAM), obtained from get_free_pages with order == 10, to
user space.  One implementation I have uses remap_pfn_range(); it seems
to work OK.  I do have other issues, but I just want to make sure it is
reasonable to use the remap_pfn_range() call for what I am doing.


Now, LDD3 says not to use the remap_pfn_range() call for this, because
it only gives access to reserved pages and to physical addresses above
the top of physical memory.  LDD3 instead refers to using the nopage
method, with notes about maintaining proper reference counts when
dealing with clusters of pages.  The material in the book on this issue
seems dated.


Is there still a limitation on using remap_pfn_range() to remap kernel
RAM to user space?


So, in a test driver, I tried using the "fault" method (previously
called nopage) for that purpose.

Here's a snippet of the fault callback:


offset = vmf->pgoff << PAGE_SHIFT;
if (offset > dev->dma_buff_size)
        printk("ds3b3_vm_fault: SIGBUS - my_offset: %#lx vmf_pgoff: %#lx page_shift: %i\n",
               offset, vmf->pgoff, PAGE_SHIFT);

addr = (char *)vma->vm_start;
addr += offset;

page = virt_to_page(addr);

vmf->page = page;


When the application loads the module, the following is printed on the
console.


My printks from the fault handler:

<4>ds3b3_vm_fault: entered - vma->vm_start: 0x48000000 vma->vm_end
<4>ds3b3_vm_fault: entered - vma->vm_flags: 0x820fb vma->vm_pgoff 0x0
<4>ds3b3_vm_fault: entered - vmf->flags: 0x1 vmf->pgoff 0x0
vmf->virtual_address: 0x48000000
<4>ds3b3_vm_fault: SUCCESS - vmf->page: 0xc13d1000



Kernel output to the console:
<1>BUG: Bad page map in process dcb pte:880004d2 pmd:0c5ec400
<1>addr:48000000 vm_flags:000820fb anon_vma:(null) mapping:ce4927d0
<1>vma->vm_ops->fault: ds3b3_vm_fault+0x0/0xf8 [ds3b3]
<1>vma->vm_file->f_op->mmap: ds3b3_nopage_mmap+0x0/0x4c [ds3b3]
<4>Call Trace:
<4>[cd659d80] [c0006bc0] show_stack+0x44/0x16c (unreliable)
<4>[cd659dc0] [c00627c8] print_bad_pte+0x140/0x1cc
<4>[cd659df0] [c00628d0] vm_normal_page+0x7c/0xb4
<4>[cd659e00] [c00630b4] follow_page+0xf4/0x1f0
<4>[cd659e20] [c00645e4] __get_user_pages+0x130/0x3ec
<4>[cd659e80] [c0064b28] make_pages_present+0x8c/0xc4
<4>[cd659e90] [c0066780] mlock_vma_pages_range+0x74/0x9c
<4>[cd659eb0] [c0068efc] mmap_region+0x1dc/0x3c8
<4>[cd659f10] [c00037b8] sys_mmap+0x78/0x100
<4>[cd659f40] [c000e558] ret_from_syscall+0x0/0x3c


I do not understand why the first page of the buffer is determined to be
a bad page.  Do I need to perform any initialization on the buffer pages
after allocation, prior to the application calling mmap, or do I need
to set specific VM flag(s)?

Any comments or advice would be appreciated.




John Price  <john.p.price at >

L-3 Communications
Security & Detection Systems Division, 
10E Commerce Way, Woburn, MA 01801


More information about the Linuxppc-dev mailing list