Reply-To: Sean Christopherson
Date: Thu, 16 Oct 2025 17:32:42 -0700
In-Reply-To: <20251017003244.186495-1-seanjc@google.com>
Mime-Version: 1.0
References: <20251017003244.186495-1-seanjc@google.com>
X-Mailer: git-send-email 2.51.0.858.gf9c4a03a3a-goog
Message-ID: <20251017003244.186495-25-seanjc@google.com>
Subject: [PATCH v3 24/25] KVM: TDX: Guard VM state transitions with "all" the locks
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Madhavan Srinivasan, Anup Patel, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Sean Christopherson, Paolo Bonzini, "Kirill A. Shutemov"
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, x86@kernel.org, linux-coco@lists.linux.dev,
	linux-kernel@vger.kernel.org, Ira Weiny, Kai Huang, Michael Roth,
	Yan Zhao, Vishal Annapurve, Rick Edgecombe, Ackerley Tng, Binbin Wu
Content-Type: text/plain; charset="utf-8"

Acquire kvm->lock, kvm->slots_lock, and all vcpu->mutex locks when
servicing ioctls that (a) transition the TD to a new state, i.e. when
doing INIT or FINALIZE, or (b) are only valid if the TD is in a specific
state, i.e. when initializing a vCPU or memory region.

Acquiring "all" the locks fixes several KVM_BUG_ON() situations where a
SEAMCALL can fail due to racing actions, e.g. if tdh_vp_create() contends
with either tdh_mr_extend() or tdh_mr_finalize().

For all intents and purposes, the paths in question are fully serialized,
i.e. there's no reason to try to allow anything remotely interesting to
happen.  Smack 'em with a big hammer instead of trying to be "nice".

Acquire kvm->lock to prevent VM-wide things from happening, slots_lock to
prevent kvm_mmu_zap_all_fast(), and _all_ vCPU mutexes to prevent vCPUs
from interfering.

Use the recently-renamed kvm_arch_vcpu_unlocked_ioctl() to service the
vCPU-scoped ioctls to avoid a lock inversion problem, e.g. due to taking
vcpu->mutex outside kvm->lock.  See also commit ecf371f8b02d ("KVM: SVM:
Reject SEV{-ES} intra host migration if vCPU creation is in-flight"),
which fixed a similar bug with SEV intra-host migration where an in-flight
vCPU creation could race with a VM-wide state transition.

Define a fancy new CLASS to handle the lock+check => unlock logic with
guard()-like syntax:

	CLASS(tdx_vm_state_guard, guard)(kvm);
	if (IS_ERR(guard))
		return PTR_ERR(guard);

to simplify juggling the many locks.

Note!  Take kvm->slots_lock *after* all vcpu->mutex locks, as per KVM's
soon-to-be-documented lock ordering rules[1].
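As a rough, standalone sketch of what such a guard boils down to
(illustrative only: vm_guard_t, guard_ctor, guard_dtor, and example_ioctl
are made-up names, and the real DEFINE_CLASS()/CLASS() macros expand
differently), the lock+check => unlock flow pairs a constructor with a
destructor that runs at scope exit via the compiler's cleanup attribute:

	/*
	 * Illustrative sketch only, not the real cleanup.h expansion.  The
	 * constructor takes the locks (or returns an ERR_PTR on failure);
	 * the destructor runs automatically at scope exit and unlocks only
	 * if the constructor succeeded.
	 */
	typedef void *vm_guard_t;

	static vm_guard_t guard_ctor(struct kvm *kvm)
	{
		/* ...take kvm->lock, all vcpu->mutex, then kvm->slots_lock... */
		return kvm;		/* or ERR_PTR(-EBUSY) on failure */
	}

	static void guard_dtor(vm_guard_t *g)
	{
		if (!IS_ERR(*g)) {
			/* ...drop the locks in reverse order... */
		}
	}

	static int example_ioctl(struct kvm *kvm)
	{
		vm_guard_t guard __attribute__((cleanup(guard_dtor))) = guard_ctor(kvm);

		if (IS_ERR(guard))
			return PTR_ERR(guard);

		/*
		 * The VM state transition runs fully serialized here; the
		 * locks are dropped when 'guard' goes out of scope.
		 */
		return 0;
	}

The IS_ERR() check in the destructor is what lets callers bail out early
without unlocking anything that was never locked.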
Link: https://lore.kernel.org/all/20251016235538.171962-1-seanjc@google.com [1]
Reported-by: Yan Zhao
Closes: https://lore.kernel.org/all/aLFiPq1smdzN3Ary@yzhao56-desk.sh.intel.com
Signed-off-by: Sean Christopherson
Suggested-by: Kai Huang
---
 arch/x86/kvm/vmx/tdx.c | 63 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 84b5fe654c99..d6541b08423f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2632,6 +2632,46 @@ static int tdx_read_cpuid(struct kvm_vcpu *vcpu, u32 leaf, u32 sub_leaf,
 	return -EIO;
 }
 
+typedef void *tdx_vm_state_guard_t;
+
+static tdx_vm_state_guard_t tdx_acquire_vm_state_locks(struct kvm *kvm)
+{
+	int r;
+
+	mutex_lock(&kvm->lock);
+
+	if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus)) {
+		r = -EBUSY;
+		goto out_err;
+	}
+
+	r = kvm_lock_all_vcpus(kvm);
+	if (r)
+		goto out_err;
+
+	/*
+	 * Note the unintuitive ordering!  vcpu->mutex must be taken outside
+	 * kvm->slots_lock!
+	 */
+	mutex_lock(&kvm->slots_lock);
+	return kvm;
+
+out_err:
+	mutex_unlock(&kvm->lock);
+	return ERR_PTR(r);
+}
+
+static void tdx_release_vm_state_locks(struct kvm *kvm)
+{
+	mutex_unlock(&kvm->slots_lock);
+	kvm_unlock_all_vcpus(kvm);
+	mutex_unlock(&kvm->lock);
+}
+
+DEFINE_CLASS(tdx_vm_state_guard, tdx_vm_state_guard_t,
+	     if (!IS_ERR(_T)) tdx_release_vm_state_locks(_T),
+	     tdx_acquire_vm_state_locks(kvm), struct kvm *kvm);
+
 static int tdx_td_init(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 {
 	struct kvm_tdx_init_vm __user *user_data = u64_to_user_ptr(cmd->data);
@@ -2644,6 +2684,10 @@ static int tdx_td_init(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	BUILD_BUG_ON(sizeof(*init_vm) != 256 + sizeof_field(struct kvm_tdx_init_vm, cpuid));
 	BUILD_BUG_ON(sizeof(struct td_params) != 1024);
 
+	CLASS(tdx_vm_state_guard, guard)(kvm);
+	if (IS_ERR(guard))
+		return PTR_ERR(guard);
+
 	if (kvm_tdx->state != TD_STATE_UNINITIALIZED)
 		return -EINVAL;
 
@@ -2743,7 +2787,9 @@ static int tdx_td_finalize(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 
-	guard(mutex)(&kvm->slots_lock);
+	CLASS(tdx_vm_state_guard, guard)(kvm);
+	if (IS_ERR(guard))
+		return PTR_ERR(guard);
 
 	if (!is_hkid_assigned(kvm_tdx) || kvm_tdx->state == TD_STATE_RUNNABLE)
 		return -EINVAL;
@@ -2781,8 +2827,6 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	if (r)
 		return r;
 
-	guard(mutex)(&kvm->lock);
-
 	switch (tdx_cmd.id) {
 	case KVM_TDX_CAPABILITIES:
 		r = tdx_get_capabilities(&tdx_cmd);
@@ -3090,8 +3134,6 @@ static int tdx_vcpu_init_mem_region(struct kvm_vcpu *vcpu, struct kvm_tdx_cmd *c
 	if (tdx->state != VCPU_TD_STATE_INITIALIZED)
 		return -EINVAL;
 
-	guard(mutex)(&kvm->slots_lock);
-
 	/* Once TD is finalized, the initial guest memory is fixed. */
 	if (kvm_tdx->state == TD_STATE_RUNNABLE)
 		return -EINVAL;
@@ -3147,7 +3189,8 @@ static int tdx_vcpu_init_mem_region(struct kvm_vcpu *vcpu, struct kvm_tdx_cmd *c
 
 int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 {
-	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	struct kvm_tdx_cmd cmd;
 	int r;
 
@@ -3155,12 +3198,13 @@ int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 	if (r)
 		return r;
 
+	CLASS(tdx_vm_state_guard, guard)(kvm);
+	if (IS_ERR(guard))
+		return PTR_ERR(guard);
+
 	if (!is_hkid_assigned(kvm_tdx) || kvm_tdx->state == TD_STATE_RUNNABLE)
 		return -EINVAL;
 
-	if (mutex_lock_killable(&vcpu->mutex))
-		return -EINTR;
-
 	vcpu_load(vcpu);
 
 	switch (cmd.id) {
@@ -3177,7 +3221,6 @@ int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 
 	vcpu_put(vcpu);
 
-	mutex_unlock(&vcpu->mutex);
 	return r;
 }
 
-- 
2.51.0.858.gf9c4a03a3a-goog