[Lguest] Lguest mechanism.

Sujit Sanjeev sujit771 at gmail.com
Mon Jun 2 14:57:31 EST 2008


Thanks Rusty, that helped!

From the host's launcher point of view, at what virtual address ranges
are the guest kernel and guest user space mapped?

Ex: UMLinux maps the host kernel at [0xc0000000, 0xffffffff];
      the guest kernel occupies [0x70000000, 0xc0000000] and
      guest applications occupy [0x0, 0x70000000].

Is it the same for Lguest too?
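
If I've understood your explanation, the launcher doesn't need a fixed
carve-up like UMLinux's: guest "physical" memory just lives at an offset
inside the launcher's own address space. A minimal sketch of that
translation (guest_base, from_guest_phys() and to_guest_phys() are
modelled on the launcher in Documentation/lguest/lguest.c, so the real
code may differ):

    /* Sketch only: launcher virtual address at which the guest's
     * "physical" memory is mapped. */
    static char *guest_base;

    /* Guest physical -> host (launcher) virtual: just add the offset. */
    static void *from_guest_phys(unsigned long guest_phys)
    {
            return guest_base + guest_phys;
    }

    /* Host (launcher) virtual -> guest physical: subtract it back out. */
    static unsigned long to_guest_phys(const void *host_virt)
    {
            return (const char *)host_virt - guest_base;
    }

So a guest page-table entry naming guest-physical page N really refers to
launcher virtual page guest_base + N * PAGE_SIZE, which the host's own
page tables then map to some real physical page.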

Cheers,
Sujit

On Sun, Jun 1, 2008 at 7:08 PM, Rusty Russell <rusty at rustcorp.com.au> wrote:
>
> On Saturday 31 May 2008 08:16:34 Sujit Sanjeev wrote:
> > Hi Rusty,
> >
> > I was wondering if there are any documents which could briefly explain
> > the structure of the virtual address space of a normal user process
> > executing within a guest.
>
> No single document, but it's conceptually simple.  The documentation does
> explain this on the way through the code.
>
> > Basically, I would like to understand how the traditional 3G/1G
> > (user:kernel) split of the virtual address space is changed by execution
> > within a VM.  How is the virtual address space of the VMM/host kernel
> > included in the normal 4GB address space of a process?
>
> The guest controls its virtual mappings as normal.  It is the "physical" pages
> of the guest which are really the virtual pages of the launcher:
>
> Guest:                           Host:
>
> Page tables:                     Offset:
>   virtual -> guest physical        guest physical -> host virtual
>                                  Page tables:
>                                    host virtual -> host physical
>
> So the guest puts (what it thinks are) physical page numbers in its page
> tables, and the host offsets and maps those to the real physical page
> numbers in the real page tables it builds for the guest.
>
> See "Guest" and "Host" part of documentation.
>
> Cheers,
> Rusty.
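
(Writing out the host-side step for my own benefit: when the guest writes
a page-table entry, the host must substitute the real page frame before
the entry lands in the page tables the CPU actually uses. This is purely
illustrative pseudo-C: gpfn_to_host_pfn() is a made-up helper, and the
real code in drivers/lguest/page_tables.c does considerably more, e.g.
permission checks and accessed/dirty handling.)

    /* Hypothetical helper: real lguest finds the host frame by looking
     * up the launcher's mapping at guest_base + gpfn * PAGE_SIZE. */
    extern unsigned long gpfn_to_host_pfn(unsigned long gpfn);

    /* Turn one guest PTE into a "shadow" PTE holding the real frame. */
    static unsigned long shadow_pte(unsigned long guest_pte)
    {
            unsigned long gpfn  = guest_pte >> 12;   /* guest frame number */
            unsigned long flags = guest_pte & 0xfff; /* low PTE flag bits  */

            return (gpfn_to_host_pfn(gpfn) << 12) | flags;
    }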


