[PATCH v3 24/25] KVM: TDX: Guard VM state transitions with "all" the locks
Yan Zhao
yan.y.zhao at intel.com
Mon Oct 27 20:26:09 AEDT 2025
On Fri, Oct 24, 2025 at 09:57:20AM -0700, Sean Christopherson wrote:
> On Fri, Oct 24, 2025, Yan Zhao wrote:
> > On Thu, Oct 16, 2025 at 05:32:42PM -0700, Sean Christopherson wrote:
> > We may have missed a zap in the guest_memfd punch hole path:
> >
> > The SEAMCALLs tdh_mem_range_block(), tdh_mem_track() and tdh_mem_page_remove()
> > in the guest_memfd punch hole path are only protected by the filemap invalidate
> > lock and mmu_lock, so they could contend with the v1 version of tdh_vp_init().
> >
> > (I'm writing a selftest to verify this, but I haven't been able to reproduce
> > tdh_vp_init(v1) returning BUSY yet. However, this race condition should be
> > theoretically possible.)
> >
> > Resources             SHARED users                EXCLUSIVE users
> > ------------------------------------------------------------------------
> > (1) TDR               tdh_mng_rdwr                tdh_mng_create
> >                       tdh_vp_create               tdh_mng_add_cx
> >                       tdh_vp_addcx                tdh_mng_init
> >                       tdh_vp_init(v0)             tdh_mng_vpflushdone
> >                       tdh_vp_enter                tdh_mng_key_config
> >                       tdh_vp_flush                tdh_mng_key_freeid
> >                       tdh_vp_rd_wr                tdh_mr_extend
> >                       tdh_mem_sept_add            tdh_mr_finalize
> >                       tdh_mem_sept_remove         tdh_vp_init(v1)
> >                       tdh_mem_page_aug            tdh_mem_page_add
> >                       tdh_mem_page_remove
> >                       tdh_mem_range_block
> >                       tdh_mem_track
> >                       tdh_mem_range_unblock
> >                       tdh_phymem_page_reclaim
> >
> > Do you think we can acquire the mmu_lock for cmd KVM_TDX_INIT_VCPU?
>
> Ugh, I'd rather not? Refresh me, what's the story with "v1"? Are we now on v2?
No... We are now on v1.
As in [1], I found that the TDX module changed the SEAMCALL TDH_VP_INIT behavior
to require an exclusive lock on the TDR resource when leaf_opcode.version > 0.
Therefore, we moved KVM_TDX_INIT_VCPU to tdx_vcpu_unlocked_ioctl() in patch 22.
[1] https://lore.kernel.org/all/aLa34QCJCXGLk%2Ffl@yzhao56-desk.sh.intel.com/
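To make the contention concrete, here is a rough pseudocode sketch of the two
racing paths (the lock annotations come from the table quoted above; the call
arguments and the surrounding KVM code are omitted, so this is illustrative
only, not the actual call sites):

	/*
	 * Task A: guest_memfd punch-hole (zap) path.
	 * Runs under the filemap invalidate lock and mmu_lock; per the
	 * table above, these SEAMCALLs take the TDR lock as SHARED inside
	 * the TDX module.
	 */
	tdh_mem_range_block(...);
	tdh_mem_track(...);
	tdh_mem_page_remove(...);

	/*
	 * Task B: KVM_TDX_INIT_VCPU.
	 * On a v1 TDX module, TDH.VP.INIT takes the TDR lock EXCLUSIVE.
	 * Nothing in KVM orders this against Task A's SEAMCALLs, so
	 * tdh_vp_init() can fail with TDX_OPERAND_BUSY.
	 */
	tdh_vp_init(...);
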
> If this is effectively limited to deprecated TDX modules, my vote would be to
> ignore the flaw and avoid even more complexity in KVM.
On the contrary, it's the new TDX module that has this flaw, not the deprecated
ones...
> > > @@ -3155,12 +3198,13 @@ int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> > >  	if (r)
> > >  		return r;
> > >
> > > +	CLASS(tdx_vm_state_guard, guard)(kvm);
> > Should we move the guard inside each cmd? Then there's no need to acquire the
> > locks in the default case.
>
> No, I don't think it's a good tradeoff. We'd also need to move vcpu_{load,put}()
> into the cmd helpers, and theoretically slowing down a bad ioctl invocation due
> to taking extra locks is a complete non-issue.
Fair enough.
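
For reference, the shape being agreed on here (guard and vcpu_{load,put}()
staying in the unlocked ioctl, cmd helpers untouched) would look roughly like
the sketch below. This is an illustration only, not the v3 patch itself: the
copy-in of the cmd and the tdx_vcpu_init() dispatch are placeholders.

	int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
	{
		struct kvm *kvm = vcpu->kvm;
		struct kvm_tdx_cmd cmd;
		int r = -EINVAL;

		if (copy_from_user(&cmd, argp, sizeof(cmd)))
			return -EFAULT;

		/*
		 * Take the VM-state locks and load the vCPU once, up front,
		 * so every cmd helper runs with the same protection.  A
		 * bogus cmd pays the cost of the extra locks, which is a
		 * non-issue in practice.
		 */
		CLASS(tdx_vm_state_guard, guard)(kvm);
		vcpu_load(vcpu);

		switch (cmd.id) {
		case KVM_TDX_INIT_VCPU:
			r = tdx_vcpu_init(vcpu, &cmd);	/* placeholder name */
			break;
		default:
			break;
		}

		vcpu_put(vcpu);
		return r;
	}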