From nobody Tue Dec 16 14:43:48 2025
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 13:09:33 -0700
In-Reply-To:
 <20251030200951.3402865-1-seanjc@google.com>
References: <20251030200951.3402865-1-seanjc@google.com>
Message-ID: <20251030200951.3402865-11-seanjc@google.com>
Subject: [PATCH v4 10/28] KVM: TDX: Fold tdx_sept_drop_private_spte() into tdx_sept_remove_private_spte()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Madhavan Srinivasan, Anup Patel, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Sean Christopherson, Paolo Bonzini, "Kirill A. Shutemov"
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, x86@kernel.org, linux-coco@lists.linux.dev,
	linux-kernel@vger.kernel.org, Ira Weiny, Kai Huang, Binbin Wu,
	Michael Roth, Yan Zhao, Vishal Annapurve, Rick Edgecombe, Ackerley Tng
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Fold tdx_sept_drop_private_spte() into tdx_sept_remove_private_spte() as
a step towards having "remove" be the one and only function that deals
with removing/zapping/dropping a SPTE, e.g. to avoid having to
differentiate between "zap", "drop", and "remove".

Eliminating the "drop" helper also gets rid of what is effectively dead
code due to redundant checks, e.g. on an HKID being assigned.

No functional change intended.
Reviewed-by: Binbin Wu
Reviewed-by: Kai Huang
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/tdx.c | 90 +++++++++++++++++++-----------------------
 1 file changed, 40 insertions(+), 50 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c242d73b6a7b..abea9b3d08cf 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1648,55 +1648,6 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	return tdx_mem_page_record_premap_cnt(kvm, gfn, level, pfn);
 }
 
-static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
-				      enum pg_level level, struct page *page)
-{
-	int tdx_level = pg_level_to_tdx_sept_level(level);
-	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-	gpa_t gpa = gfn_to_gpa(gfn);
-	u64 err, entry, level_state;
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EIO;
-
-	if (KVM_BUG_ON(!is_hkid_assigned(kvm_tdx), kvm))
-		return -EIO;
-
-	/*
-	 * When zapping private page, write lock is held. So no race condition
-	 * with other vcpu sept operation.
-	 * Race with TDH.VP.ENTER due to (0-step mitigation) and Guest TDCALLs.
-	 */
-	err = tdh_mem_page_remove(&kvm_tdx->td, gpa, tdx_level, &entry,
-				  &level_state);
-
-	if (unlikely(tdx_operand_busy(err))) {
-		/*
-		 * The second retry is expected to succeed after kicking off all
-		 * other vCPUs and prevent them from invoking TDH.VP.ENTER.
-		 */
-		tdx_no_vcpus_enter_start(kvm);
-		err = tdh_mem_page_remove(&kvm_tdx->td, gpa, tdx_level, &entry,
-					  &level_state);
-		tdx_no_vcpus_enter_stop(kvm);
-	}
-
-	if (KVM_BUG_ON(err, kvm)) {
-		pr_tdx_error_2(TDH_MEM_PAGE_REMOVE, err, entry, level_state);
-		return -EIO;
-	}
-
-	err = tdh_phymem_page_wbinvd_hkid((u16)kvm_tdx->hkid, page);
-
-	if (KVM_BUG_ON(err, kvm)) {
-		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err);
-		return -EIO;
-	}
-	tdx_quirk_reset_page(page);
-	return 0;
-}
-
 static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, void *private_spt)
 {
@@ -1858,7 +1809,11 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 					enum pg_level level, kvm_pfn_t pfn)
 {
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	struct page *page = pfn_to_page(pfn);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	u64 err, entry, level_state;
 	int ret;
 
 	/*
@@ -1869,6 +1824,10 @@ static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (KVM_BUG_ON(!is_hkid_assigned(to_kvm_tdx(kvm)), kvm))
 		return -EIO;
 
+	/* TODO: handle large pages. */
+	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
+		return -EIO;
+
 	ret = tdx_sept_zap_private_spte(kvm, gfn, level, page);
 	if (ret <= 0)
 		return ret;
@@ -1879,7 +1838,38 @@ static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 	 */
 	tdx_track(kvm);
 
-	return tdx_sept_drop_private_spte(kvm, gfn, level, page);
+	/*
+	 * When zapping private page, write lock is held. So no race condition
+	 * with other vcpu sept operation.
+	 * Race with TDH.VP.ENTER due to (0-step mitigation) and Guest TDCALLs.
+	 */
+	err = tdh_mem_page_remove(&kvm_tdx->td, gpa, tdx_level, &entry,
+				  &level_state);
+
+	if (unlikely(tdx_operand_busy(err))) {
+		/*
+		 * The second retry is expected to succeed after kicking off all
+		 * other vCPUs and prevent them from invoking TDH.VP.ENTER.
+		 */
+		tdx_no_vcpus_enter_start(kvm);
+		err = tdh_mem_page_remove(&kvm_tdx->td, gpa, tdx_level, &entry,
+					  &level_state);
+		tdx_no_vcpus_enter_stop(kvm);
+	}
+
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error_2(TDH_MEM_PAGE_REMOVE, err, entry, level_state);
+		return -EIO;
+	}
+
+	err = tdh_phymem_page_wbinvd_hkid((u16)kvm_tdx->hkid, page);
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err);
+		return -EIO;
+	}
+
+	tdx_quirk_reset_page(page);
+	return 0;
 }
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
-- 
2.51.1.930.gacf6e81ea2-goog