From: Yan Zhao <yan.y.zhao@intel.com>
To: pbonzini@redhat.com, seanjc@google.com
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, x86@kernel.org,
    rick.p.edgecombe@intel.com, dave.hansen@intel.com, kas@kernel.org,
    tabba@google.com, ackerleytng@google.com, michael.roth@amd.com,
    david@kernel.org, vannapurve@google.com, sagis@google.com,
    vbabka@suse.cz, thomas.lendacky@amd.com, nik.borisov@suse.com,
    pgonda@google.com, fan.du@intel.com, jun.miao@intel.com,
    francescolavra.fl@gmail.com, jgross@suse.com, ira.weiny@intel.com,
    isaku.yamahata@intel.com,
    xiaoyao.li@intel.com, kai.huang@intel.com, binbin.wu@linux.intel.com,
    chao.p.peng@intel.com, chao.gao@intel.com, yan.y.zhao@intel.com
Subject: [PATCH v3 14/24] KVM: Change the return type of gfn_handler_t() from bool to int
Date: Tue, 6 Jan 2026 18:22:22 +0800
Message-ID: <20260106102222.25160-1-yan.y.zhao@intel.com>
X-Mailer: git-send-email 2.43.2
In-Reply-To: <20260106101646.24809-1-yan.y.zhao@intel.com>
References: <20260106101646.24809-1-yan.y.zhao@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Change the return type of gfn_handler_t() from bool to int: a negative
return value indicates failure, 1 indicates success with a TLB flush
required, and 0 indicates success with no flush required.

This prepares for a later change that will allow
kvm_pre_set_memory_attributes() to fail.

No functional change intended.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
v3:
- Rebased.

RFC v2:
- No change.

RFC v1:
- New patch.
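For reference, below is a minimal, self-contained sketch of the new
handler contract (negative = failure, 1 = success with flush required,
0 = success with no flush) and of how a caller accumulates the flush
bit. The types and names are simplified stand-ins for illustration
only, not KVM's actual API:

  /* Standalone sketch of the int-returning handler convention. */
  #include <stdio.h>

  struct demo_gfn_range {
          unsigned long start;
          unsigned long end;
  };

  /* Hypothetical handler type mirroring gfn_handler_t's new contract. */
  typedef int (*demo_handler_t)(struct demo_gfn_range *range);

  static int demo_handler(struct demo_gfn_range *range)
  {
          if (range->end <= range->start)
                  return -22;     /* failure (-EINVAL-style) */
          return 1;               /* success, flush required */
  }

  static void demo_handle_range(struct demo_gfn_range *range,
                                demo_handler_t handler)
  {
          int flush = 0;
          int ret = handler(range);

          if (ret < 0) {          /* new: handlers may now fail */
                  printf("handler failed: %d\n", ret);
                  return;
          }
          flush |= ret;           /* accumulate the flush bit */
          if (flush)
                  printf("flush [%#lx, %#lx)\n", range->start, range->end);
  }

  int main(void)
  {
          struct demo_gfn_range ok = { .start = 0x1000, .end = 0x2000 };
          struct demo_gfn_range bad = { .start = 0x2000, .end = 0x2000 };

          demo_handle_range(&ok, demo_handler);
          demo_handle_range(&bad, demo_handler);
          return 0;
  }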
---
 arch/arm64/kvm/mmu.c             |  8 ++++----
 arch/loongarch/kvm/mmu.c         |  8 ++++----
 arch/mips/kvm/mmu.c              |  6 +++---
 arch/powerpc/kvm/book3s.c        |  4 ++--
 arch/powerpc/kvm/e500_mmu_host.c |  8 ++++----
 arch/riscv/kvm/mmu.c             | 12 ++++++------
 arch/x86/kvm/mmu/mmu.c           | 20 ++++++++++----------
 include/linux/kvm_host.h         | 12 ++++++------
 virt/kvm/kvm_main.c              | 24 ++++++++++++++++--------
 9 files changed, 55 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5ab0cfa08343..c39d3ef577f8 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2221,12 +2221,12 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	return false;
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
 
 	if (!kvm->arch.mmu.pgt)
-		return false;
+		return 0;
 
 	return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
							       range->start << PAGE_SHIFT,
@@ -2237,12 +2237,12 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	 */
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
 
 	if (!kvm->arch.mmu.pgt)
-		return false;
+		return 0;
 
 	return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
							       range->start << PAGE_SHIFT,
diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index a7fa458e3360..06fa060878c9 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -511,7 +511,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 				range->end << PAGE_SHIFT, &ctx);
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_ptw_ctx ctx;
 
@@ -523,15 +523,15 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 				range->end << PAGE_SHIFT, &ctx);
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	gpa_t gpa = range->start << PAGE_SHIFT;
 	kvm_pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
 
 	if (ptep && kvm_pte_present(NULL, ptep) && kvm_pte_young(*ptep))
-		return true;
+		return 1;
 
-	return false;
+	return 0;
 }
 
 /*
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index d2c3b6b41f18..c26cc89c8e98 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -444,18 +444,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	return true;
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	return kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	gpa_t gpa = range->start << PAGE_SHIFT;
 	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
 
 	if (!gpa_pte)
-		return false;
+		return 0;
 	return pte_young(*gpa_pte);
 }
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index d79c5d1098c0..9bf6e1cf64f1 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -886,12 +886,12 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	return kvm->arch.kvm_ops->unmap_gfn_range(kvm, range);
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	return kvm->arch.kvm_ops->age_gfn(kvm, range);
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	return kvm->arch.kvm_ops->test_age_gfn(kvm, range);
 }
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 06caf8bbbe2b..dd5411ee242e 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -697,16 +697,16 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	return kvm_e500_mmu_unmap_gfn(kvm, range);
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	/* XXX could be more clever ;) */
-	return false;
+	return 0;
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	/* XXX could be more clever ;) */
-	return false;
+	return 0;
 }
 
 /*****************************************/
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 4ab06697bfc0..aa163d2ef7d5 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -259,7 +259,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	return false;
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	pte_t *ptep;
 	u32 ptep_level = 0;
@@ -267,7 +267,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	struct kvm_gstage gstage;
 
 	if (!kvm->arch.pgd)
-		return false;
+		return 0;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
@@ -277,12 +277,12 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	gstage.pgd = kvm->arch.pgd;
 	if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT,
				       &ptep, &ptep_level))
-		return false;
+		return 0;
 
 	return ptep_test_and_clear_young(NULL, 0, ptep);
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	pte_t *ptep;
 	u32 ptep_level = 0;
@@ -290,7 +290,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	struct kvm_gstage gstage;
 
 	if (!kvm->arch.pgd)
-		return false;
+		return 0;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
@@ -300,7 +300,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	gstage.pgd = kvm->arch.pgd;
 	if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT,
				       &ptep, &ptep_level))
-		return false;
+		return 0;
 
 	return pte_young(ptep_get(ptep));
 }
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 029f2f272ffc..1b180279aacd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1810,7 +1810,7 @@ static bool kvm_may_have_shadow_mmu_sptes(struct kvm *kvm)
 	return !tdp_mmu_enabled || READ_ONCE(kvm->arch.indirect_shadow_pages);
 }
 
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
 
@@ -1823,7 +1823,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	return young;
 }
 
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
 
@@ -7962,8 +7962,8 @@ static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
 	lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
 }
 
-bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
-					struct kvm_gfn_range *range)
+int kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+				       struct kvm_gfn_range *range)
 {
 	struct kvm_memory_slot *slot = range->slot;
 	int level;
@@ -7980,10 +7980,10 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 	 * a hugepage can be used for affected ranges.
 	 */
 	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
-		return false;
+		return 0;
 
 	if (WARN_ON_ONCE(range->end <= range->start))
-		return false;
+		return 0;
 
 	/*
 	 * If the head and tail pages of the range currently allow a hugepage,
@@ -8042,8 +8042,8 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
 	return true;
 }
 
-bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
-					 struct kvm_gfn_range *range)
+int kvm_arch_post_set_memory_attributes(struct kvm *kvm,
+					struct kvm_gfn_range *range)
 {
 	unsigned long attrs = range->arg.attributes;
 	struct kvm_memory_slot *slot = range->slot;
@@ -8059,7 +8059,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 	 * SHARED may now allow hugepages.
 	 */
 	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
-		return false;
+		return 0;
 
 	/*
 	 * The sequence matters here: upper levels consume the result of lower
@@ -8106,7 +8106,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 			hugepage_set_mixed(slot, gfn, level);
 		}
 	}
-	return false;
+	return 0;
 }
 
 void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e563bb22c481..6f3d29db0505 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -273,8 +273,8 @@ struct kvm_gfn_range {
 	bool lockless;
 };
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
-bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
-bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
+int kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
+int kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 int kvm_split_cross_boundary_leafs(struct kvm *kvm, struct kvm_gfn_range *range,
				   bool shared);
 #endif
@@ -734,10 +734,10 @@ static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
 extern bool vm_memory_attributes;
 bool kvm_range_has_vm_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
					unsigned long mask, unsigned long attrs);
-bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+int kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+				       struct kvm_gfn_range *range);
+int kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 					struct kvm_gfn_range *range);
-bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
-					 struct kvm_gfn_range *range);
 #else
 #define vm_memory_attributes false
 #endif /* CONFIG_KVM_VM_MEMORY_ATTRIBUTES */
@@ -1568,7 +1568,7 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void kvm_mmu_invalidate_begin(struct kvm *kvm);
 void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end);
 void kvm_mmu_invalidate_end(struct kvm *kvm);
-bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
+int kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 
 long kvm_arch_dev_ioctl(struct file *filp,
			unsigned int ioctl, unsigned long arg);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index feeef7747099..471f798dba2d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -517,7 +517,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
 	return container_of(mn, struct kvm, mmu_notifier);
 }
 
-typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
+typedef int (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
 typedef void (*on_lock_fn_t)(struct kvm *kvm);
 
@@ -601,6 +601,7 @@ static __always_inline kvm_mn_ret_t kvm_handle_hva_range(struct kvm *kvm,
 	kvm_for_each_memslot_in_hva_range(node, slots,
					  range->start, range->end - 1) {
 		unsigned long hva_start, hva_end;
+		int ret;
 
 		slot = container_of(node, struct kvm_memory_slot, hva_node[slots->node_idx]);
 		hva_start = max_t(unsigned long, range->start, slot->userspace_addr);
@@ -641,7 +642,9 @@ static __always_inline kvm_mn_ret_t kvm_handle_hva_range(struct kvm *kvm,
 				goto mmu_unlock;
 			}
 		}
-		r.ret |= range->handler(kvm, &gfn_range);
+		ret = range->handler(kvm, &gfn_range);
+		WARN_ON_ONCE(ret < 0);
+		r.ret |= ret;
 	}
 }
 
@@ -727,7 +730,7 @@ void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
 	}
 }
 
-bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+int
+int kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
 	return kvm_unmap_gfn_range(kvm, range);
@@ -2507,7 +2510,8 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
 	struct kvm_memslots *slots;
 	struct kvm_memslot_iter iter;
 	bool found_memslot = false;
-	bool ret = false;
+	bool flush = false;
+	int ret = 0;
 	int i;
 
 	gfn_range.arg = range->arg;
@@ -2540,19 +2544,23 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
 			range->on_lock(kvm);
 		}
 
-		ret |= range->handler(kvm, &gfn_range);
+		ret = range->handler(kvm, &gfn_range);
+		if (ret < 0)
+			goto err;
+		flush |= ret;
 		}
 	}
 
-	if (range->flush_on_ret && ret)
+err:
+	if (range->flush_on_ret && flush)
 		kvm_flush_remote_tlbs(kvm);
 
 	if (found_memslot)
 		KVM_MMU_UNLOCK(kvm);
 }
 
-static bool kvm_pre_set_memory_attributes(struct kvm *kvm,
-					  struct kvm_gfn_range *range)
+static int kvm_pre_set_memory_attributes(struct kvm *kvm,
+					 struct kvm_gfn_range *range)
 {
 	/*
 	 * Unconditionally add the range to the invalidation set, regardless of
-- 
2.43.2