From nobody Tue Feb 10 06:04:56 2026
Date: Mon, 16 Dec 2024 17:58:03 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241216175803.2716565-1-qperret@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241216175803.2716565-19-qperret@google.com>
Subject: [PATCH v3 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
	Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Introduce the KVM_PGT_S2() helper macro to allow switching easily from
the traditional pgtable code to the pKVM version in mmu.c. The cost of
this 'indirection' is expected to be very minimal, as
is_protected_kvm_enabled() is backed by a static key.

With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using
the host's kvm_s2_mmu from EL2 and enjoy the ride.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_mmu.h   |  16 +++++
 arch/arm64/kvm/arm.c               |   9 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   2 -
 arch/arm64/kvm/mmu.c               | 107 +++++++++++++++++++++--------
 4 files changed, 101 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 66d93e320ec8..d116ab4230e8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -353,6 +353,22 @@ static inline bool kvm_is_nested_s2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	return &kvm->arch.mmu != mmu;
 }
 
+static inline void kvm_fault_lock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_lock(&kvm->mmu_lock);
+	else
+		read_lock(&kvm->mmu_lock);
+}
+
+static inline void kvm_fault_unlock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_unlock(&kvm->mmu_lock);
+	else
+		read_unlock(&kvm->mmu_lock);
+}
+
 #ifdef CONFIG_PTDUMP_STAGE2_DEBUGFS
 void kvm_s2_ptdump_create_debugfs(struct kvm *kvm);
 #else
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 55cc62b2f469..9bcbc7b8ed38 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -502,7 +502,10 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -574,6 +577,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);
 
@@ -594,6 +600,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 130f5f23bcb5..258d572eed62 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
-	hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
 	hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWI | HCR_TWE);
 	hyp_vcpu->vcpu.arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2) &
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 641e4fec1659..7c2995cb4577 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,14 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
+#define KVM_PGT_S2(fn, ...)						\
+	({								\
+		typeof(kvm_pgtable_stage2_ ## fn) *__fn = kvm_pgtable_stage2_ ## fn; \
+		if (is_protected_kvm_enabled())				\
+			__fn = pkvm_pgtable_ ## fn;			\
+		__fn(__VA_ARGS__);					\
+	})
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
					    phys_addr_t size)
 {
@@ -147,7 +156,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_S2(split, pgt, addr, next - addr, cache);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -168,15 +177,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 	return 0;
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
				      gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
 	return 0;
 }
 
@@ -225,7 +242,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 	void *pgtable = page_to_virt(page);
 	s8 level = page_private(page);
 
-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_S2(free_unlinked, &kvm_s2_mm_ops, pgtable, level);
 }
 
 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -280,6 +297,11 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 	__invalidate_icache_guest_page(va, size);
 }
 
+static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(unmap, pgt, addr, size);
+}
+
 /*
  * Unmapping vs dcache management:
  *
@@ -324,8 +346,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
-				   may_block));
+	WARN_ON(stage2_apply_range(mmu, start, end, kvm_s2_unmap, may_block));
 }
 
 void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
@@ -334,9 +355,14 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
 	__unmap_stage2_range(mmu, start, size, may_block);
 }
 
+static int kvm_s2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(flush, pgt, addr, size);
+}
+
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_flush);
 }
 
 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +968,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 		return -ENOMEM;
 
 	mmu->arch = &kvm->arch;
-	err = kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops);
+	err = KVM_PGT_S2(init, pgt, mmu, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;
 
+	mmu->pgt = pgt;
+	if (is_protected_kvm_enabled())
+		return 0;
+
 	mmu->last_vcpu_ran = alloc_percpu(typeof(*mmu->last_vcpu_ran));
 	if (!mmu->last_vcpu_ran) {
 		err = -ENOMEM;
@@ -959,7 +989,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
 	mmu->split_page_cache.gfp_zero = __GFP_ZERO;
 
-	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 
 	if (kvm_is_nested_s2_mmu(kvm, mmu))
@@ -968,7 +997,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return 0;
 
 out_destroy_pgtable:
-	kvm_pgtable_stage2_destroy(pgt);
+	KVM_PGT_S2(destroy, pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1065,7 +1094,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);
 
 	if (pgt) {
-		kvm_pgtable_stage2_destroy(pgt);
+		KVM_PGT_S2(destroy, pgt);
 		kfree(pgt);
 	}
 }
@@ -1082,9 +1111,11 @@ static void *hyp_mc_alloc_fn(void *unused)
 
 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
-	if (is_protected_kvm_enabled())
-		__free_hyp_memcache(mc, hyp_mc_free_fn,
-				    kvm_host_va, NULL);
+	if (!is_protected_kvm_enabled())
+		return;
+
+	kfree(mc->mapping);
+	__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
 }
 
 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@@ -1092,6 +1123,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
 	if (!is_protected_kvm_enabled())
 		return 0;
 
+	if (!mc->mapping) {
+		mc->mapping = kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT);
+		if (!mc->mapping)
+			return -ENOMEM;
+	}
+
 	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
				    kvm_host_pa, NULL);
 }
@@ -1130,8 +1167,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			break;
 
 		write_lock(&kvm->mmu_lock);
-		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-					     &cache, 0);
+		ret = KVM_PGT_S2(map, pgt, addr, PAGE_SIZE, pa, prot, &cache, 0);
 		write_unlock(&kvm->mmu_lock);
 		if (ret)
 			break;
@@ -1143,6 +1179,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	return ret;
 }
 
+static int kvm_s2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(wrprotect, pgt, addr, size);
+}
 /**
  * kvm_stage2_wp_range() - write protect stage2 memory region range
  * @mmu:	The KVM stage-2 MMU pointer
@@ -1151,7 +1191,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_wrprotect);
 }
 
 /**
@@ -1442,9 +1482,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
+	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
@@ -1472,8 +1512,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (!fault_is_perm || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+		if (!is_protected_kvm_enabled()) {
+			memcache = &vcpu->arch.mmu_page_cache;
+			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+		} else {
+			memcache = &vcpu->arch.pkvm_memcache;
+			ret = topup_hyp_memcache(memcache, min_pages);
+		}
 		if (ret)
 			return ret;
 	}
@@ -1494,7 +1541,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
+	if (logging_active || is_protected_kvm_enabled()) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1634,7 +1681,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		prot |= kvm_encode_nested_level(nested);
 	}
 
-	read_lock(&kvm->mmu_lock);
+	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
 	if (mmu_invalidate_retry(kvm, mmu_seq)) {
 		ret = -EAGAIN;
@@ -1696,16 +1743,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
+		ret = KVM_PGT_S2(relax_perms, pgt, fault_ipa, prot, flags);
 	} else {
-		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_S2(map, pgt, fault_ipa, vma_pagesize,
				     __pfn_to_phys(pfn), prot,
				     memcache, flags);
 	}
 
 out_unlock:
 	kvm_release_faultin_page(kvm, page, !!ret, writable);
-	read_unlock(&kvm->mmu_lock);
+	kvm_fault_unlock(kvm);
 
 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret)
@@ -1724,7 +1771,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
+	KVM_PGT_S2(mkyoung, mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
@@ -1764,7 +1811,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	}
 
 	/* Falls between the IPA range and the PARange? */
-	if (fault_ipa >= BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) {
+	if (fault_ipa >= BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) {
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
 
 		if (is_iabt)
@@ -1930,7 +1977,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
						   range->start << PAGE_SHIFT,
						   size, true);
 	/*
@@ -1946,7 +1993,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
						   range->start << PAGE_SHIFT,
						   size, false);
 }
-- 
2.47.1.613.gc27f4b7a9f-goog