[PATCH v3 1/2] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path

Fabiano Rosas farosas at linux.ibm.com
Wed May 5 04:37:32 AEST 2021


Nicholas Piggin <npiggin at gmail.com> writes:

> An error message when you try to start the nested guest telling you
> pass -machine cap-htm=off would be better... I guess that should
> really all work with caps etc today though so TM's a bad example.
> But assume we don't have a cap for the bit we disable? Maybe we
> should have caps for all HFSCR bits, or I'm just worried about
> something not very important.

I'm avoiding returning an error from H_ENTER_NESTED at first run
specifically because of this. I think it interferes with L1 migration.

Say we have an L1 that has a workload that involves nested guests. It
can boot and run them just fine (i.e. it uses the same HFSCR value for
its guests as L0). If we migrate that L1 into a host that uses different
HFSCR bits and therefore will always fail H_ENTER_NESTED, that is
effectively the same as migrating into a host that does not provide
KVM_CAP_PPC_NESTED_HV.

We would need some way to inform the migration code that the remote host
will not allow L1 to run nested guests with that particular HFSCR value
so that it can decide whether that host is a suitable migration
target. Otherwise we're migrating the guest into a host that will not
allow its operation to continue.

Returning an error later on, during the nested guest's lifetime, is
less harmful I think, but nothing really went wrong with the hypercall.
It did its job and ran L2 as L1 asked, although it ignored some bits.
Can we say that L1 misconfigured L2 when there is no way for L1 to
negotiate which bits it can use? The same set of bits could be
considered valid by another L0. It seems that as long as we hardcode
some bits we shouldn't fail the hcall because of them.
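
To make "ignored some bits" concrete, here is a minimal sketch (not the
actual patch; the helper name is made up) of how L0 could clamp the
HFSCR that L1 passes in via hv_guest_state instead of failing the
hcall:

static void sanitise_l2_hfscr(struct kvm_vcpu *vcpu,
                              struct hv_guest_state *hr)
{
        /*
         * L2 only gets facilities that L0 already grants to L1; any
         * extra bits L1 asked for are silently dropped rather than
         * causing H_ENTER_NESTED to fail.
         */
        hr->hfscr &= vcpu->arch.hfscr;
}

With that, the hcall keeps its current API and L2 simply runs with the
offending facilities disabled.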

I think that since this is all a bit theoretical right now, forwarding
a program interrupt into L1 is a good, less permanent solution for the
moment: it does not alter the hcall's API, and if we start stumbling
into similar issues in the future we'll have more information to come
up with a proper solution.
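
For reference, a rough sketch of the forwarding idea (the handler name
is hypothetical and the SRR1 flag is only illustrative, though
kvmppc_core_queue_program(), SRR1_PROGILL and RESUME_GUEST are existing
symbols), assuming L1's register state has already been restored when
L2 faults on a facility that L0 keeps disabled:

static int forward_hfscr_fault_to_l1(struct kvm_vcpu *vcpu)
{
        /*
         * Instead of failing H_ENTER_NESTED, queue a program
         * interrupt for L1 and let it decide what to do with the
         * guest that used a facility we don't provide.
         */
        kvmppc_core_queue_program(vcpu, SRR1_PROGILL);

        return RESUME_GUEST;
}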

>
> Thanks,
> Nick

