[PATCH v4 02/16] KVM: PPC: Use RCU for arch.spapr_tce_tables
Paul Mackerras
paulus at samba.org
Thu Aug 21 15:25:08 EST 2014
On Wed, Jul 30, 2014 at 07:31:21PM +1000, Alexey Kardashevskiy wrote:
> At the moment spapr_tce_tables is not protected against races. This patch
> makes use of the RCU variants of the list helpers. As some of the code runs
> in real mode, it uses the just-introduced list_for_each_entry_rcu_notrace().
>
> This converts release_spapr_tce_table() to an RCU-scheduled handler.
...
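(For the deferred-free side, I take "RCU scheduled handler" to mean the
usual call_rcu() pattern -- roughly the following, assuming the patch adds
a struct rcu_head member, here called "rcu", to kvmppc_spapr_tce_table:

static void release_spapr_tce_table(struct rcu_head *head)
{
	struct kvmppc_spapr_tce_table *stt =
		container_of(head, struct kvmppc_spapr_tce_table, rcu);

	/* ... put the TCE pages and free stt itself ... */
}

	/* teardown path: unlink under kvm->lock, then defer the free */
	list_del_rcu(&stt->list);
	call_rcu(&stt->rcu, release_spapr_tce_table);

so readers that started walking the list before the list_del_rcu() can
still use the entry safely until the grace period ends.)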
> @@ -106,7 +108,8 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
> int i;
>
> /* Check this LIOBN hasn't been previously allocated */
> - list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
> + list_for_each_entry_rcu_notrace(stt, &kvm->arch.spapr_tce_tables,
> + list) {
> if (stt->liobn == args->liobn)
> return -EBUSY;
> }
You need something to protect this traversal of the list. In fact...
> @@ -131,7 +134,7 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
> kvm_get_kvm(kvm);
>
> mutex_lock(&kvm->lock);
> - list_add(&stt->list, &kvm->arch.spapr_tce_tables);
> + list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
>
> mutex_unlock(&kvm->lock);
>
since it is the kvm->lock mutex that protects the list against
changes, you should do the check for whether the liobn is already in
the list inside the same mutex_lock ... unlock pair that protects the
list_add_rcu.
Or, if there is some other reason why two threads can't race and both
add the same liobn here, you need to explain that (and in that case
it's presumably unnecessary to hold kvm->lock).
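Something like this (untested sketch, reusing the names from your patch)
is what I have in mind:

	mutex_lock(&kvm->lock);

	/* kvm->lock serializes additions, so doing the check here closes
	 * the window in which two callers both see the LIOBN as free. */
	list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
		if (stt->liobn == args->liobn) {
			mutex_unlock(&kvm->lock);
			return -EBUSY;
		}
	}

	/* ... allocate and initialize the new stt as before ... */

	list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
	mutex_unlock(&kvm->lock);

(Allocating under the mutex is fine here; the allocation just has to be
complete before the list_add_rcu so readers never see a half-initialized
entry.)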
> diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
> index 89e96b3..b1914d9 100644
> --- a/arch/powerpc/kvm/book3s_64_vio_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
> @@ -50,7 +50,8 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
> /* udbg_printf("H_PUT_TCE(): liobn=0x%lx ioba=0x%lx, tce=0x%lx\n", */
> /* liobn, ioba, tce); */
>
> - list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
> + list_for_each_entry_rcu_notrace(stt, &kvm->arch.spapr_tce_tables,
> + list) {
> if (stt->liobn == liobn) {
> unsigned long idx = ioba >> SPAPR_TCE_SHIFT;
> struct page *page;
> @@ -82,7 +83,8 @@ long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
> struct kvm *kvm = vcpu->kvm;
> struct kvmppc_spapr_tce_table *stt;
>
> - list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
> + list_for_each_entry_rcu_notrace(stt, &kvm->arch.spapr_tce_tables,
> + list) {
> if (stt->liobn == liobn) {
> unsigned long idx = ioba >> SPAPR_TCE_SHIFT;
> struct page *page;
What protects these list iterations? Do we know that interrupts are
always disabled here, or that the caller has done an rcu_read_lock or
something? It seems a little odd that you haven't added any calls to
rcu_read_lock/unlock().
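For reference, the pattern I'd expect around a reader of an RCU-protected
list looks something like this (modulo the _notrace variant for real mode):

	rcu_read_lock();
	list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
		if (stt->liobn == liobn) {
			/* ... translate ioba and touch the TCE table ... */
			break;
		}
	}
	rcu_read_unlock();

If real mode can't call rcu_read_lock()/rcu_read_unlock(), then a comment
should explain what else guarantees the grace period can't end while these
loops run -- for instance, that these hcalls always execute with interrupts
disabled.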
Paul.