[PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
Takuya Yoshikawa
yoshikawa.takuya at oss.ntt.co.jp
Mon Dec 26 16:05:20 EST 2011
(2011/12/26 8:35), Paul Mackerras wrote:
> On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:
>
>> So if I read things correctly, this is the only case you're setting
>> pages as dirty. What if you have the following:
>>
>> guest adds HTAB entry x
>> guest writes to page mapped by x
>> guest removes HTAB entry x
>> host fetches dirty log
>
> In that case the dirtiness is preserved in the setting of the
> KVMPPC_RMAP_CHANGED bit in the rmap entry. kvm_test_clear_dirty()
> returns 1 if that bit is set (and clears it). Using the rmap entry
> for this is convenient because (a) we also use it for saving the
> referenced bit when a HTAB entry is removed, and we can transfer both
> R and C over in one operation; (b) we need to be able to save away the
> C bit in real mode, and we already need to get the real-mode address
> of the rmap entry -- if we wanted to save it in a dirty bitmap we'd
> have to do an extra translation to get the real-mode address of the
> dirty bitmap word; (c) to avoid SMP races, if we were asynchronously
> setting bits in the dirty bitmap we'd have to do the double-buffering
> thing that x86 does, which seems more complicated than using the rmap
> entry (which we already have a lock bit for).
From my x86 dirty logging experience I have a concern about your code:
it looks slow even when there are no, or only a few, dirty pages in the slot.
+ for (i = 0; i < memslot->npages; ++i) {
+ if (kvm_test_clear_dirty(kvm, rmapp))
+ __set_bit_le(i, map);
+ ++rmapp;
+ }
The check is done for each page, and this can be very expensive because
the number of pages is not small.
When we scan the dirty_bitmap instead, 64 pages are checked at once, so
the problem is much less significant.
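For comparison, a minimal sketch of the word-at-a-time bitmap scan I mean (a hypothetical helper, not the actual x86 kernel code): a clean 64-bit word dismisses 64 pages with a single compare.

```c
#include <stdint.h>

/* Count dirty pages by scanning the dirty bitmap one 64-bit word at a
 * time. A zero word means 64 consecutive clean pages, skipped with a
 * single compare instead of 64 per-page checks. */
static unsigned int count_dirty(const uint64_t *bitmap, unsigned int nwords)
{
	unsigned int dirty = 0;

	for (unsigned int w = 0; w < nwords; ++w) {
		uint64_t word = bitmap[w];

		if (!word)	/* 64 clean pages handled at once */
			continue;
		dirty += (unsigned int)__builtin_popcountll(word);
	}
	return dirty;
}
```

With a mostly clean slot this loop runs in npages/64 iterations, almost all of which take the early-continue path.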
Though I do not know well what kvm-ppc's dirty logging is aiming at, I guess
reporting cleanliness to user space without noticeable delay is important.
E.g. for VGA, most of the cases are clean. For live migration, the
chance of seeing a completely clean slot is small, but almost all cases
are sparse.
>
>> PS: Always CC kvm at vger for stuff that other might want to review
>> (basically all patches)
(Though I sometimes check kvm-ppc on the archives,)
having the GET_DIRTY_LOG work CCed there will be welcome.
Takuya
>
> So why do we have a separate kvm-ppc list then? :)