Date: Thu, 16 Oct 2025 17:32:28 -0700
In-Reply-To: <20251017003244.186495-1-seanjc@google.com>
References: <20251017003244.186495-1-seanjc@google.com>
Message-ID: <20251017003244.186495-11-seanjc@google.com>
Subject: [PATCH v3 10/25] KVM: x86/mmu: Drop the return code from kvm_x86_ops.remove_external_spte()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
    Madhavan Srinivasan, Anup Patel, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson, Paolo Bonzini, "Kirill A. Shutemov"
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, x86@kernel.org, linux-coco@lists.linux.dev,
    linux-kernel@vger.kernel.org, Ira Weiny, Kai Huang, Michael Roth,
    Yan Zhao, Vishal Annapurve, Rick Edgecombe, Ackerley Tng, Binbin Wu

Drop the return code from kvm_x86_ops.remove_external_spte(), a.k.a.
tdx_sept_remove_private_spte(), as KVM simply does a KVM_BUG_ON() on
failure, and that KVM_BUG_ON() is redundant since all error paths in TDX
also do a KVM_BUG_ON().

Opportunistically pass the spte instead of the pfn, as the API is clearly
about removing an spte.

Suggested-by: Rick Edgecombe
Reviewed-by: Binbin Wu
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++------
 arch/x86/kvm/vmx/tdx.c          | 17 ++++++++---------
 3 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 48598d017d6f..7e92aebd07e8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1855,8 +1855,8 @@ struct kvm_x86_ops {
 				void *external_spt);
 
 	/* Update external page table from spte getting removed, and flush TLB. */
-	int (*remove_external_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
-				    kvm_pfn_t pfn_for_gfn);
+	void (*remove_external_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				     u64 spte);
 
 	bool (*has_wbinvd_exit)(void);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 9b4006c2120e..c09c25f3f93b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -362,9 +362,6 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 static void remove_external_spte(struct kvm *kvm, gfn_t gfn, u64 old_spte,
 				 int level)
 {
-	kvm_pfn_t old_pfn = spte_to_pfn(old_spte);
-	int ret;
-
 	/*
 	 * External (TDX) SPTEs are limited to PG_LEVEL_4K, and external
 	 * PTs are removed in a special order, involving free_external_spt().
@@ -377,9 +374,8 @@ static void remove_external_spte(struct kvm *kvm, gfn_t gfn, u64 old_spte,
 
 	/* Zapping leaf spte is allowed only when write lock is held. */
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	/* Because write lock is held, operation should success. */
-	ret = kvm_x86_call(remove_external_spte)(kvm, gfn, level, old_pfn);
-	KVM_BUG_ON(ret, kvm);
+
+	kvm_x86_call(remove_external_spte)(kvm, gfn, level, old_spte);
 }
 
 /**
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index abea9b3d08cf..f5cbcbf4e663 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1806,12 +1806,12 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 	return tdx_reclaim_page(virt_to_page(private_spt));
 }
 
-static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
-					enum pg_level level, kvm_pfn_t pfn)
+static void tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
+					 enum pg_level level, u64 spte)
 {
+	struct page *page = pfn_to_page(spte_to_pfn(spte));
 	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-	struct page *page = pfn_to_page(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
 	u64 err, entry, level_state;
 	int ret;
@@ -1822,15 +1822,15 @@ static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * there can't be anything populated in the private EPT.
 	 */
 	if (KVM_BUG_ON(!is_hkid_assigned(to_kvm_tdx(kvm)), kvm))
-		return -EIO;
+		return;
 
 	/* TODO: handle large pages. */
 	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EIO;
+		return;
 
 	ret = tdx_sept_zap_private_spte(kvm, gfn, level, page);
 	if (ret <= 0)
-		return ret;
+		return;
 
 	/*
 	 * TDX requires TLB tracking before dropping private page. Do
@@ -1859,17 +1859,16 @@ static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error_2(TDH_MEM_PAGE_REMOVE, err, entry, level_state);
-		return -EIO;
+		return;
 	}
 
 	err = tdh_phymem_page_wbinvd_hkid((u16)kvm_tdx->hkid, page);
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err);
-		return -EIO;
+		return;
 	}
 
 	tdx_quirk_reset_page(page);
-	return 0;
 }
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
-- 
2.51.0.858.gf9c4a03a3a-goog
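
P.S. (not part of the patch): below is a toy, self-contained C sketch of the
callback-shape change described in the changelog. The type aliases, the
address mask, and both remove_external_spte_*() stubs are simplified
stand-ins invented for illustration, not the kernel's definitions; the only
thing it mirrors from the patch is the idea that the pfn is derivable from
the spte, so the caller can pass the spte and no longer needs to assert on a
return code.

/*
 * Standalone illustration only: the types, the mask and the stubs below are
 * simplified stand-ins, not the kernel's definitions.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef u64 kvm_pfn_t;
typedef u64 gfn_t;

#define PAGE_SHIFT		12
#define SPTE_BASE_ADDR_MASK	0x000ffffffffff000ULL	/* assumed bit layout */

/* The pfn is recoverable from the spte, so passing the spte loses nothing. */
static kvm_pfn_t spte_to_pfn(u64 spte)
{
	return (spte & SPTE_BASE_ADDR_MASK) >> PAGE_SHIFT;
}

/* Old shape: the hook reports failure even though it already asserts. */
static int remove_external_spte_old(gfn_t gfn, kvm_pfn_t pfn)
{
	(void)gfn;
	(void)pfn;
	/* ...every error path would assert internally before returning... */
	return 0;
}

/* New shape: the hook asserts internally and returns nothing. */
static void remove_external_spte_new(gfn_t gfn, u64 spte)
{
	kvm_pfn_t pfn = spte_to_pfn(spte);	/* derived, no longer passed in */

	(void)gfn;
	(void)pfn;
}

int main(void)
{
	u64 old_spte = (0x1234ULL << PAGE_SHIFT) | 0x7;	/* low bits = flags */
	gfn_t gfn = 0x42;

	/* Before: a redundant assertion on the return code at the call site. */
	if (remove_external_spte_old(gfn, spte_to_pfn(old_spte)))
		fprintf(stderr, "KVM_BUG_ON() would fire here\n");

	/* After: the call site collapses to a single statement. */
	remove_external_spte_new(gfn, old_spte);
	return 0;
}

Built as a normal userspace program, the sketch just shows the caller
shrinking to a single statement once the hook stops reporting an error it
has already asserted on.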