[PATCH 4/5] KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs
Mathieu Desnoyers
mathieu.desnoyers at efficios.com
Fri Aug 20 07:52:36 AEST 2021
----- On Aug 17, 2021, at 8:12 PM, Sean Christopherson seanjc at google.com wrote:
> Add a test to verify an rseq's CPU ID is updated correctly if the task is
> migrated while the kernel is handling KVM_RUN. This is a regression test
> for a bug introduced by commit 72c3c0fe54a3 ("x86/kvm: Use generic xfer
> to guest work function"), where TIF_NOTIFY_RESUME would be cleared by KVM
> without updating rseq, leading to a stale CPU ID and other badness.
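
As background, the userspace contract at stake here can be shown with a tiny
stand-alone program (illustrative only, not part of the patch; it registers
rseq directly via syscall() and may fail with EBUSY if the libc has already
registered rseq on its own):

#define _GNU_SOURCE
#include <linux/rseq.h>
#include <sched.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* The kernel-visible rseq area; the uapi struct is 32-byte aligned. */
static struct rseq rs = {
	.cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
};

int main(void)
{
	/*
	 * Register the area; the kernel is expected to refresh rs.cpu_id
	 * whenever the task returns to userspace after a migration.
	 */
	if (syscall(__NR_rseq, &rs, sizeof(rs), 0, 0))
		return 1;

	/* On a correct kernel, both values name the current CPU. */
	printf("sched_getcpu() = %d, rseq cpu_id = %u\n",
	       sched_getcpu(), rs.cpu_id);

	syscall(__NR_rseq, &rs, sizeof(rs), RSEQ_FLAG_UNREGISTER, 0);
	return 0;
}

The selftest below exercises exactly this guarantee, except that the thread
doing the reads is also bouncing in and out of KVM_RUN.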
>
> Signed-off-by: Sean Christopherson <seanjc at google.com>
> ---
[...]
> +
> +static void *migration_worker(void *ign)
> +{
> +	cpu_set_t allowed_mask;
> +	int r, i, nr_cpus, cpu;
> +
> +	CPU_ZERO(&allowed_mask);
> +
> +	nr_cpus = CPU_COUNT(&possible_mask);
> +
> +	for (i = 0; i < 20000; i++) {
> +		cpu = i % nr_cpus;
> +		if (!CPU_ISSET(cpu, &possible_mask))
> +			continue;
> +
> +		CPU_SET(cpu, &allowed_mask);
> +
> +		r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
> +		TEST_ASSERT(!r, "sched_setaffinity failed, errno = %d (%s)", errno,
> +			    strerror(errno));
> +
> +		CPU_CLR(cpu, &allowed_mask);
> +
> +		usleep(10);
> +	}
> +	done = true;
> +	return NULL;
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	struct kvm_vm *vm;
> +	u32 cpu, rseq_cpu;
> +	int r;
> +
> +	/* Tell stdout not to buffer its content */
> +	setbuf(stdout, NULL);
> +
> +	r = sched_getaffinity(0, sizeof(possible_mask), &possible_mask);
> +	TEST_ASSERT(!r, "sched_getaffinity failed, errno = %d (%s)", errno,
> +		    strerror(errno));
> +
> +	if (CPU_COUNT(&possible_mask) < 2) {
> +		print_skip("Only one CPU, task migration not possible\n");
> +		exit(KSFT_SKIP);
> +	}
> +
> +	sys_rseq(0);
> +
> +	/*
> +	 * Create and run a dummy VM that immediately exits to userspace via
> +	 * GUEST_SYNC, while concurrently migrating the process by setting its
> +	 * CPU affinity.
> +	 */
> +	vm = vm_create_default(VCPU_ID, 0, guest_code);
> +
> +	pthread_create(&migration_thread, NULL, migration_worker, 0);
> +
> +	while (!done) {
> +		vcpu_run(vm, VCPU_ID);
> +		TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
> +			    "Guest failed?");
> +
> +		cpu = sched_getcpu();
> +		rseq_cpu = READ_ONCE(__rseq.cpu_id);
> +
> +		/*
> +		 * Verify rseq's CPU matches sched's CPU, and that sched's CPU
> +		 * is stable. This doesn't handle the case where the task is
> +		 * migrated between sched_getcpu() and reading rseq, and again
> +		 * between reading rseq and sched_getcpu(), but in practice no
> +		 * false positives have been observed, while on the other hand
> +		 * blocking migration while this thread reads CPUs messes with
> +		 * the timing and prevents hitting failures on a buggy kernel.
> +		 */
I think you could get a stable cpu id across the sched_getcpu() and
__rseq_abi.cpu_id reads if you add a pthread mutex to protect:

  the sched_getcpu() and __rseq_abi.cpu_id reads in this thread

vs

  the sched_setaffinity() calls within the migration thread.

Thoughts ?
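
For illustration, a rough stand-alone sketch of that idea (no KVM, rseq
registered directly via syscall(); the names affinity_lock and migrate_tid
are mine, not from the patch). The mutex keeps this program's own affinity
changes from landing between the paired sched_getcpu()/cpu_id reads, so the
two values can be compared for strict equality:

#define _GNU_SOURCE
#include <linux/rseq.h>
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

static struct rseq rs;			/* kernel-updated rseq area */
static pthread_mutex_t affinity_lock = PTHREAD_MUTEX_INITIALIZER;
static cpu_set_t possible_mask;
static volatile bool done;
static pid_t migrate_tid;		/* thread to bounce around: main() */

static void *migration_worker(void *ign)
{
	cpu_set_t allowed_mask;
	int i, cpu, nr_cpus = CPU_COUNT(&possible_mask);

	CPU_ZERO(&allowed_mask);
	for (i = 0; i < 20000; i++) {
		cpu = i % nr_cpus;
		if (!CPU_ISSET(cpu, &possible_mask))
			continue;
		CPU_SET(cpu, &allowed_mask);

		/* Serialize affinity changes against the reads in main(). */
		pthread_mutex_lock(&affinity_lock);
		sched_setaffinity(migrate_tid, sizeof(allowed_mask), &allowed_mask);
		pthread_mutex_unlock(&affinity_lock);

		CPU_CLR(cpu, &allowed_mask);
		usleep(10);
	}
	done = true;
	return NULL;
}

int main(void)
{
	pthread_t thread;
	int cpu, rseq_cpu;

	sched_getaffinity(0, sizeof(possible_mask), &possible_mask);
	migrate_tid = syscall(SYS_gettid);

	/* May fail (EBUSY) if the libc already registered rseq itself. */
	if (syscall(__NR_rseq, &rs, sizeof(rs), 0, 0))
		return 1;

	pthread_create(&thread, NULL, migration_worker, NULL);

	while (!done) {
		/*
		 * Take the same lock around both reads so that none of our
		 * own migrations can slip in between them; the two values
		 * should then match on a correct kernel.
		 */
		pthread_mutex_lock(&affinity_lock);
		cpu = sched_getcpu();
		rseq_cpu = *(volatile __u32 *)&rs.cpu_id;
		pthread_mutex_unlock(&affinity_lock);

		if (cpu != rseq_cpu) {
			fprintf(stderr, "rseq CPU = %d, sched CPU = %d\n",
				rseq_cpu, cpu);
			exit(1);
		}
	}

	pthread_join(thread, NULL);
	syscall(__NR_rseq, &rs, sizeof(rs), RSEQ_FLAG_UNREGISTER, 0);
	return 0;
}

Whether the extra serialization changes the timing enough to stop hitting the
original bug is a fair question, per the comment in the patch.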
Thanks,
Mathieu
> +		TEST_ASSERT(rseq_cpu == cpu || cpu != sched_getcpu(),
> +			    "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
> +	}
> +
> +	pthread_join(migration_thread, NULL);
> +
> +	kvm_vm_free(vm);
> +
> +	sys_rseq(RSEQ_FLAG_UNREGISTER);
> +
> +	return 0;
> +}
> --
> 2.33.0.rc1.237.g0d66db33f3-goog
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com