From nobody Fri Dec 19 04:53:40 2025
Date: Wed, 18 Dec 2024 19:40:59 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-19-qperret@google.com>
Subject: [PATCH v4 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Introduce the KVM_PGT_FN() helper macro to allow switching from the
traditional pgtable code to the pKVM version easily in mmu.c. The cost
of this 'indirection' is expected to be minimal, as
is_protected_kvm_enabled() is backed by a static key.

With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using
the host's kvm_s2_mmu from EL2 and enjoy the ride.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
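Note for reviewers: the dispatch pattern is easier to see in
isolation, so here is a minimal userspace sketch of the token-pasting
trick. This is not kernel code: the demo_* names are invented for
illustration, and a plain bool stands in for the static-key-backed
is_protected_kvm_enabled().

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the kernel's static-key-backed predicate. */
static bool protected_mode;
static bool is_protected_kvm_enabled(void) { return protected_mode; }

/*
 * Same shape as the helper in the patch below: prepend 'p' to the
 * function name when pKVM is enabled, so e.g. kvm_pgtable_stage2_map
 * becomes pkvm_pgtable_stage2_map at the call site.
 */
#define KVM_PGT_FN(fn)	(!is_protected_kvm_enabled() ? fn : p ## fn)

/* Hypothetical host/pKVM pair sharing a single signature. */
static int demo_stage2_map(unsigned long ipa)
{
	printf("host pgtable map of %#lx\n", ipa);
	return 0;
}

static int pdemo_stage2_map(unsigned long ipa)
{
	printf("pKVM map of %#lx via the hypervisor\n", ipa);
	return 0;
}

int main(void)
{
	KVM_PGT_FN(demo_stage2_map)(0x8000);	/* host variant */
	protected_mode = true;
	KVM_PGT_FN(demo_stage2_map)(0x8000);	/* pKVM variant */
	return 0;
}

Both arms of the ternary must have the same function type, so the host
and pKVM variants are forced to keep identical signatures; that is
what lets mmu.c pick between them at each call site with no change to
the surrounding code.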
 arch/arm64/include/asm/kvm_mmu.h   | 16 ++++++
 arch/arm64/kvm/arm.c               |  9 +++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  2 -
 arch/arm64/kvm/mmu.c               | 87 ++++++++++++++++++++----------
 4 files changed, 82 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 66d93e320ec8..d116ab4230e8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -353,6 +353,22 @@ static inline bool kvm_is_nested_s2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	return &kvm->arch.mmu != mmu;
 }
 
+static inline void kvm_fault_lock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_lock(&kvm->mmu_lock);
+	else
+		read_lock(&kvm->mmu_lock);
+}
+
+static inline void kvm_fault_unlock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_unlock(&kvm->mmu_lock);
+	else
+		read_unlock(&kvm->mmu_lock);
+}
+
 #ifdef CONFIG_PTDUMP_STAGE2_DEBUGFS
 void kvm_s2_ptdump_create_debugfs(struct kvm *kvm);
 #else
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 55cc62b2f469..9bcbc7b8ed38 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -502,7 +502,10 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -574,6 +577,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);
 
@@ -594,6 +600,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 130f5f23bcb5..258d572eed62 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
-	hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
 	hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWI | HCR_TWE);
 	hyp_vcpu->vcpu.arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2) &
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 641e4fec1659..9403524c11c6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <asm/kvm_pkvm.h>
 #include
 #include
 #include
@@ -31,6 +32,8 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
+#define KVM_PGT_FN(fn)	(!is_protected_kvm_enabled() ? fn : p ## fn)
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
 					   phys_addr_t size)
 {
@@ -147,7 +150,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_split)(pgt, addr, next - addr, cache);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -168,15 +171,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 	return 0;
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
				      gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
 	return 0;
 }
 
@@ -225,7 +236,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 	void *pgtable = page_to_virt(page);
 	s8 level = page_private(page);
 
-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_FN(kvm_pgtable_stage2_free_unlinked)(&kvm_s2_mm_ops, pgtable, level);
 }
 
 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -324,7 +335,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
+	WARN_ON(stage2_apply_range(mmu, start, end, KVM_PGT_FN(kvm_pgtable_stage2_unmap),
 				   may_block));
 }
 
@@ -336,7 +347,7 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
 
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, KVM_PGT_FN(kvm_pgtable_stage2_flush));
 }
 
 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +953,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 		return -ENOMEM;
 
 	mmu->arch = &kvm->arch;
-	err = kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops);
+	err = KVM_PGT_FN(kvm_pgtable_stage2_init)(pgt, mmu, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;
 
+	mmu->pgt = pgt;
+	if (is_protected_kvm_enabled())
+		return 0;
+
 	mmu->last_vcpu_ran = alloc_percpu(typeof(*mmu->last_vcpu_ran));
 	if (!mmu->last_vcpu_ran) {
 		err = -ENOMEM;
@@ -959,7 +974,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
 	mmu->split_page_cache.gfp_zero = __GFP_ZERO;
 
-	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 
 	if (kvm_is_nested_s2_mmu(kvm, mmu))
@@ -968,7 +982,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return 0;
 
 out_destroy_pgtable:
-	kvm_pgtable_stage2_destroy(pgt);
+	KVM_PGT_FN(kvm_pgtable_stage2_destroy)(pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1065,7 +1079,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);
 
 	if (pgt) {
-		kvm_pgtable_stage2_destroy(pgt);
+		KVM_PGT_FN(kvm_pgtable_stage2_destroy)(pgt);
 		kfree(pgt);
 	}
 }
@@ -1082,9 +1096,11 @@ static void *hyp_mc_alloc_fn(void *unused)
 
 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
-	if (is_protected_kvm_enabled())
-		__free_hyp_memcache(mc, hyp_mc_free_fn,
-				    kvm_host_va, NULL);
+	if (!is_protected_kvm_enabled())
+		return;
+
+	kfree(mc->mapping);
+	__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
 }
 
 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@@ -1092,6 +1108,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
 	if (!is_protected_kvm_enabled())
 		return 0;
 
+	if (!mc->mapping) {
+		mc->mapping = kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT);
+		if (!mc->mapping)
+			return -ENOMEM;
+	}
+
 	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
 				    kvm_host_pa, NULL);
 }
@@ -1130,8 +1152,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			break;
 
 		write_lock(&kvm->mmu_lock);
-		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-					     &cache, 0);
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, addr, PAGE_SIZE,
+							 pa, prot, &cache, 0);
 		write_unlock(&kvm->mmu_lock);
 		if (ret)
 			break;
@@ -1151,7 +1173,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+	stage2_apply_range_resched(mmu, addr, end, KVM_PGT_FN(kvm_pgtable_stage2_wrprotect));
 }
 
 /**
@@ -1442,9 +1464,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
+	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
@@ -1472,8 +1494,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (!fault_is_perm || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+		if (!is_protected_kvm_enabled()) {
+			memcache = &vcpu->arch.mmu_page_cache;
+			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+		} else {
+			memcache = &vcpu->arch.pkvm_memcache;
+			ret = topup_hyp_memcache(memcache, min_pages);
+		}
 		if (ret)
 			return ret;
 	}
@@ -1494,7 +1523,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
+	if (logging_active || is_protected_kvm_enabled()) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1634,7 +1663,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		prot |= kvm_encode_nested_level(nested);
 	}
 
-	read_lock(&kvm->mmu_lock);
+	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
 	if (mmu_invalidate_retry(kvm, mmu_seq)) {
 		ret = -EAGAIN;
@@ -1696,16 +1725,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
 	} else {
-		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
 					     memcache, flags);
 	}
 
 out_unlock:
 	kvm_release_faultin_page(kvm, page, !!ret, writable);
-	read_unlock(&kvm->mmu_lock);
+	kvm_fault_unlock(kvm);
 
 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret)
@@ -1724,7 +1753,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
+	KVM_PGT_FN(kvm_pgtable_stage2_mkyoung)(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
@@ -1764,7 +1793,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	}
 
 	/* Falls between the IPA range and the PARange? */
-	if (fault_ipa >= BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) {
+	if (fault_ipa >= BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) {
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
 
 		if (is_iabt)
@@ -1930,7 +1959,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, true);
 	/*
@@ -1946,7 +1975,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, false);
 }
-- 
2.47.1.613.gc27f4b7a9f-goog