From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:52 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-2-rananta@google.com>
Subject: [PATCH v5 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Catalin Marinas

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range based TLB invalidations based on the IPA. Hence,
extract the core flush functionality of __flush_tlb_range() into its
own macro that accepts an 'op' argument to pass any TLBI operation,
such that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..4775378b6da1b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS    PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,                  \
+                                asid, tlb_level, tlbi_user) do {        \
+        int num = 0;                                                    \
+        int scale = 0;                                                  \
+        unsigned long addr;                                             \
+                                                                        \
+        while (pages > 0) {                                             \
+                if (!system_supports_tlb_range() ||                     \
+                    pages % 2 == 1) {                                   \
+                        addr = __TLBI_VADDR(start, asid);               \
+                        __tlbi_level(op, addr, tlb_level);              \
+                        if (tlbi_user)                                  \
+                                __tlbi_user_level(op, addr, tlb_level); \
+                        start += stride;                                \
+                        pages -= stride >> PAGE_SHIFT;                  \
+                        continue;                                       \
+                }                                                       \
+                                                                        \
+                num = __TLBI_RANGE_NUM(pages, scale);                   \
+                if (num >= 0) {                                         \
+                        addr = __TLBI_VADDR_RANGE(start, asid, scale,   \
+                                                  num, tlb_level);      \
+                        __tlbi(r##op, addr);                            \
+                        if (tlbi_user)                                  \
+                                __tlbi_user(r##op, addr);               \
+                        start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+                        pages -= __TLBI_RANGE_PAGES(num, scale);        \
+                }                                                       \
+                scale++;                                                \
+        }                                                               \
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                      unsigned long start, unsigned long end,
                                      unsigned long stride, bool last_level,
                                      int tlb_level)
 {
-        int num = 0;
-        int scale = 0;
-        unsigned long asid, addr, pages;
+        unsigned long asid, pages;
 
         start = round_down(start, stride);
         end = round_up(end, stride);
@@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
         dsb(ishst);
         asid = ASID(vma->vm_mm);
 
-        /*
-         * When the CPU does not support TLB range operations, flush the TLB
-         * entries one by one at the granularity of 'stride'. If the TLB
-         * range ops are supported, then:
-         *
-         * 1. If 'pages' is odd, flush the first page through non-range
-         *    operations;
-         *
-         * 2. For remaining pages: the minimum range granularity is decided
-         *    by 'scale', so multiple range TLBI operations may be required.
-         *    Start from scale = 0, flush the corresponding number of pages
-         *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-         *    until no pages left.
-         *
-         * Note that certain ranges can be represented by either num = 31 and
-         * scale or num = 0 and scale + 1. The loop below favours the latter
-         * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-         */
-        while (pages > 0) {
-                if (!system_supports_tlb_range() ||
-                    pages % 2 == 1) {
-                        addr = __TLBI_VADDR(start, asid);
-                        if (last_level) {
-                                __tlbi_level(vale1is, addr, tlb_level);
-                                __tlbi_user_level(vale1is, addr, tlb_level);
-                        } else {
-                                __tlbi_level(vae1is, addr, tlb_level);
-                                __tlbi_user_level(vae1is, addr, tlb_level);
-                        }
-                        start += stride;
-                        pages -= stride >> PAGE_SHIFT;
-                        continue;
-                }
-
-                num = __TLBI_RANGE_NUM(pages, scale);
-                if (num >= 0) {
-                        addr = __TLBI_VADDR_RANGE(start, asid, scale,
-                                                  num, tlb_level);
-                        if (last_level) {
-                                __tlbi(rvale1is, addr);
-                                __tlbi_user(rvale1is, addr);
-                        } else {
-                                __tlbi(rvae1is, addr);
-                                __tlbi_user(rvae1is, addr);
-                        }
-                        start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-                        pages -= __TLBI_RANGE_PAGES(num, scale);
-                }
-                scale++;
-        }
+        if (last_level)
+                __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+        else
+                __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
         dsb(ish);
 }
 
-- 
2.41.0.rc0.172.g3f132b7071-goog
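
The decomposition that __flush_tlb_range_op() performs can be modelled
outside the kernel. The sketch below is illustrative only: RANGE_NUM()
and RANGE_PAGES() are simplified stand-ins for the kernel's
__TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES() macros, and the non-range path
is assumed to advance one page per iteration rather than a
caller-supplied stride.

/*
 * User-space model of the (scale, num) decomposition done by
 * __flush_tlb_range_op(); not kernel code, helpers are simplified.
 */
#include <stdio.h>

#define RANGE_PAGES(num, scale) (((num) + 1UL) << (5 * (scale) + 1))
/* Masking with 0x1f then subtracting 1 caps num at 30, as noted above. */
#define RANGE_NUM(pages, scale) ((int)(((pages) >> (5 * (scale) + 1)) & 0x1f) - 1)

static void model_flush(unsigned long pages, int have_range_ops)
{
        int scale = 0;

        while (pages > 0) {
                if (!have_range_ops || pages % 2 == 1) {
                        printf("single-page TLBI (1 page)\n");
                        pages -= 1;
                        continue;
                }

                int num = RANGE_NUM(pages, scale);
                if (num >= 0) {
                        printf("range TLBI: scale=%d num=%d -> %lu pages\n",
                               scale, num, RANGE_PAGES(num, scale));
                        pages -= RANGE_PAGES(num, scale);
                }
                scale++;
        }
}

int main(void)
{
        /* 513 pages: one single-page op, then one range op covering 512. */
        model_flush(513, 1);
        return 0;
}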
From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:53 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-3-rananta@google.com>
Subject: [PATCH v5 2/7] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a
range of stage-2 page-tables using IPA in one go. If the system
supports FEAT_TLBIRANGE, the following patches would conveniently
replace global TLBI operations such as vmalls12e1is in the map,
unmap, and dirty-logging paths with ripas2e1is instead.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 28 ++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..60ed0880cc9d6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
         __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
         __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
         __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
+        __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
         __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
         __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
         __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr,
@@ -225,6 +226,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
                                      int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                       phys_addr_t start, unsigned long pages);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..a19a9299c8362 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,16 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
         __kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+        DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+        DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+        DECLARE_REG(unsigned long, pages, host_ctxt, 3);
+
+        __kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, pages);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
         DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -316,6 +326,7 @@ static const hcall_t host_hcall[] = {
         HANDLE_FUNC(__kvm_flush_vm_context),
         HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
         HANDLE_FUNC(__kvm_tlb_flush_vmid),
+        HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
         HANDLE_FUNC(__kvm_flush_cpu_context),
         HANDLE_FUNC(__kvm_timer_set_cntvoff),
         HANDLE_FUNC(__vgic_v3_read_vmcr),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 978179133f4b9..213b11952f641 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -130,6 +130,36 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
         __tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                phys_addr_t start, unsigned long pages)
+{
+        struct tlb_inv_context cxt;
+        unsigned long stride;
+
+        /*
+         * Since the range of addresses may not be mapped at
+         * the same level, assume the worst case as PAGE_SIZE
+         */
+        stride = PAGE_SIZE;
+        start = round_down(start, stride);
+
+        /* Switch to requested VMID */
+        __tlb_switch_to_guest(mmu, &cxt, false);
+
+        __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+        dsb(ish);
+        __tlbi(vmalle1is);
+        dsb(ish);
+        isb();
+
+        /* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+        if (icache_is_vpipt())
+                icache_inval_all_pou();
+
+        __tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
         struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..3ca3d38b7eb23 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,34 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
         __tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                phys_addr_t start, unsigned long pages)
+{
+        struct tlb_inv_context cxt;
+        unsigned long stride;
+
+        /*
+         * Since the range of addresses may not be mapped at
+         * the same level, assume the worst case as PAGE_SIZE
+         */
+        stride = PAGE_SIZE;
+        start = round_down(start, stride);
+
+        dsb(ishst);
+
+        /* Switch to requested VMID */
+        __tlb_switch_to_guest(mmu, &cxt);
+
+        __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+        dsb(ish);
+        __tlbi(vmalle1is);
+        dsb(ish);
+        isb();
+
+        __tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
         struct tlb_inv_context cxt;
-- 
2.41.0.rc0.172.g3f132b7071-goog
From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:54 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-4-rananta@google.com>
Subject: [PATCH v5 3/7] KVM: arm64: Define kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement the helper kvm_tlb_flush_vmid_range() that acts as a wrapper
for range-based TLB invalidations. For the given VMID, use the
range-based TLBI instructions to do the job, or fall back to
invalidating all the TLB entries.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_pgtable.h | 10 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 20 ++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 4cd6762bda805..1b12295a83595 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -682,4 +682,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte);
  * kvm_pgtable_prot format.
  */
 enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
+
+/**
+ * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries
+ *
+ * @mmu:	Stage-2 KVM MMU struct
+ * @addr:	The base Intermediate physical address from which to invalidate
+ * @size:	Size of the range from the base to invalidate
+ */
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                phys_addr_t addr, size_t size);
 #endif	/* __ARM64_KVM_PGTABLE_H__ */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d2..df8ac14d9d3d4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -631,6 +631,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
         return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
 }
 
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                phys_addr_t addr, size_t size)
+{
+        unsigned long pages, inval_pages;
+
+        if (!system_supports_tlb_range()) {
+                kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+                return;
+        }
+
+        pages = size >> PAGE_SHIFT;
+        while (pages > 0) {
+                inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
+                kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
+
+                addr += inval_pages << PAGE_SHIFT;
+                pages -= inval_pages;
+        }
+}
+
 #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
 
 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
-- 
2.41.0.rc0.172.g3f132b7071-goog
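
A user-space model of the chunking that kvm_tlb_flush_vmid_range()
applies may help illustrate the loop above. The constants are
assumptions made for the sketch: 4KiB pages, and MAX_TLBI_RANGE_PAGES
taken as __TLBI_RANGE_PAGES(31, 3), i.e. roughly 2M pages (8GiB) per
hypercall; they are not authoritative definitions.

/*
 * Illustrative user-space model: split a large IPA range into
 * per-hypercall chunks of at most MAX_TLBI_RANGE_PAGES pages.
 */
#include <stdio.h>

#define PAGE_SHIFT              12
#define MAX_TLBI_RANGE_PAGES    ((31ULL + 1) << (5 * 3 + 1))    /* assumed: 2M pages */

static void model_vmid_range_flush(unsigned long long addr, unsigned long long size)
{
        unsigned long long pages = size >> PAGE_SHIFT;

        while (pages > 0) {
                unsigned long long inval = pages < MAX_TLBI_RANGE_PAGES ?
                                           pages : MAX_TLBI_RANGE_PAGES;

                printf("hypercall: flush %llu pages from IPA 0x%llx\n", inval, addr);
                addr += inval << PAGE_SHIFT;
                pages -= inval;
        }
}

int main(void)
{
        /* A 20GiB range splits into three calls: 8GiB + 8GiB + 4GiB. */
        model_vmid_range_flush(0x100000000ULL, 20ULL << 30);
        return 0;
}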
From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:55 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-5-rananta@google.com>
Subject: [PATCH v5 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate
the given range in the TLB.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 arch/arm64/kvm/mmu.c              | 7 +++++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 81ab41b84f436..343fb530eea9c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1081,6 +1081,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
         return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d0a0d3dca9316..c3ec2141c3284 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -92,6 +92,13 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
         return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+        kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
+                                start_gfn << PAGE_SHIFT, pages << PAGE_SHIFT);
+        return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
         return !pfn_is_map_memory(pfn);
-- 
2.41.0.rc0.172.g3f132b7071-goog
From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:56 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-6-rananta@google.com>
Subject: [PATCH v5 5/7] KVM: arm64: Flush only the memslot after write-protect
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

After write-protecting the region, KVM currently invalidates all the
TLB entries using kvm_flush_remote_tlbs(). Instead, scope the
invalidation only to the targeted memslot. If supported, the
architecture uses the range-based TLBI instructions to flush the
memslot, or else falls back to flushing all of the TLBs.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c3ec2141c3284..94f10e670c100 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -992,7 +992,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
         write_lock(&kvm->mmu_lock);
         stage2_wp_range(&kvm->arch.mmu, start, end);
         write_unlock(&kvm->mmu_lock);
-        kvm_flush_remote_tlbs(kvm);
+        kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 /**
-- 
2.41.0.rc0.172.g3f132b7071-goog
From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:57 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-7-rananta@google.com>
Subject: [PATCH v5 6/7] KVM: arm64: Invalidate the table entries upon a range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, during operations such as a hugepage collapse, KVM flushes
the entire VM's context using the 'vmalls12e1is' TLBI operation.
Specifically, if the VM is faulting on many hugepages (say after
dirty-logging), this creates a performance penalty for the guest whose
pages have already been faulted earlier, as they would have to refill
their TLBs again.

Instead, leverage kvm_tlb_flush_vmid_range() for table entries. If the
system supports it, only the required range will be flushed. Otherwise,
it falls back to the previous mechanism.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df8ac14d9d3d4..50ef7623c54db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -766,7 +766,8 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
         * value (if any).
         */
        if (kvm_pte_table(ctx->old, ctx->level))
-               kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+               kvm_tlb_flush_vmid_range(mmu, ctx->addr,
+                                       kvm_granule_size(ctx->level));
        else if (kvm_pte_valid(ctx->old))
                kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
 
-- 
2.41.0.rc0.172.g3f132b7071-goog
From nobody Sat Feb 7 21:23:48 2026
Date: Tue, 6 Jun 2023 19:28:58 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-8-rananta@google.com>
Subject: [PATCH v5 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
given range and, as a part of break-before-make, performs TLB
invalidations with a DSB for every PTE. Doing this for every PTE can
become a performance bottleneck on some systems. Hence, if the system
supports FEAT_TLBIRANGE, defer the TLB invalidations until the entire
walk is finished, and then use range-based instructions to invalidate
the TLBs in one go. Condition deferred TLB invalidation on the system
supporting FWB, as the optimization is entirely pointless when the
unmap walker needs to perform CMOs.

Rename stage2_put_pte() to stage2_unmap_put_pte() as the function now
serves the stage-2 unmap walker specifically, rather than remaining
generic.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 67 +++++++++++++++++++++++++++++++-----
 1 file changed, 58 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 50ef7623c54db..c6e080867919d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -789,16 +789,54 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
         smp_store_release(ctx->ptep, new);
 }
 
-static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
-                           struct kvm_pgtable_mm_ops *mm_ops)
+struct stage2_unmap_data {
+        struct kvm_pgtable *pgt;
+        bool defer_tlb_flush_init;
+};
+
+static bool __stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
+{
+        /*
+         * If FEAT_TLBIRANGE is implemented, defer the individual
+         * TLB invalidations until the entire walk is finished, and
+         * then use the range-based TLBI instructions to do the
+         * invalidations. Condition deferred TLB invalidation on the
+         * system supporting FWB, as the optimization is entirely
+         * pointless when the unmap walker needs to perform CMOs.
+         */
+        return system_supports_tlb_range() && stage2_has_fwb(pgt);
+}
+
+static bool stage2_unmap_defer_tlb_flush(struct stage2_unmap_data *unmap_data)
+{
+        bool defer_tlb_flush = __stage2_unmap_defer_tlb_flush(unmap_data->pgt);
+
+        /*
+         * Since __stage2_unmap_defer_tlb_flush() is based on alternative
+         * patching and the TLBIs' operations behavior depend on this,
+         * track if there's any change in the state during the unmap sequence.
+         */
+        WARN_ON(unmap_data->defer_tlb_flush_init != defer_tlb_flush);
+        return defer_tlb_flush;
+}
+
+static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
+                                struct kvm_s2_mmu *mmu,
+                                struct kvm_pgtable_mm_ops *mm_ops)
 {
+        struct stage2_unmap_data *unmap_data = ctx->arg;
+
         /*
-         * Clear the existing PTE, and perform break-before-make with
-         * TLB maintenance if it was valid.
+         * Clear the existing PTE, and perform break-before-make if it was
+         * valid. Depending on the system support, the TLB maintenance for
+         * the same can be deferred until the entire unmap is completed.
         */
        if (kvm_pte_valid(ctx->old)) {
                kvm_clear_pte(ctx->ptep);
-               kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+
+               if (!stage2_unmap_defer_tlb_flush(unmap_data))
+                       kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+                                       ctx->addr, ctx->level);
        }
 
        mm_ops->put_page(ctx->ptep);
@@ -1005,7 +1043,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
                               enum kvm_pgtable_walk_flags visit)
 {
-        struct kvm_pgtable *pgt = ctx->arg;
+        struct stage2_unmap_data *unmap_data = ctx->arg;
+        struct kvm_pgtable *pgt = unmap_data->pgt;
         struct kvm_s2_mmu *mmu = pgt->mmu;
         struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
         kvm_pte_t *childp = NULL;
@@ -1033,7 +1072,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
         * block entry and rely on the remaining portions being faulted
         * back lazily.
         */
-       stage2_put_pte(ctx, mmu, mm_ops);
+       stage2_unmap_put_pte(ctx, mmu, mm_ops);
 
        if (need_flush && mm_ops->dcache_clean_inval_poc)
                mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1047,13 +1086,23 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+        int ret;
+        struct stage2_unmap_data unmap_data = {
+                .pgt = pgt,
+                .defer_tlb_flush_init = __stage2_unmap_defer_tlb_flush(pgt),
+        };
         struct kvm_pgtable_walker walker = {
                 .cb     = stage2_unmap_walker,
-                .arg    = pgt,
+                .arg    = &unmap_data,
                 .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
         };
 
-        return kvm_pgtable_walk(pgt, addr, size, &walker);
+        ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+        if (stage2_unmap_defer_tlb_flush(&unmap_data))
+                /* Perform the deferred TLB invalidations */
+                kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
+
+        return ret;
 }
 
 struct stage2_attr_data {
-- 
2.41.0.rc0.172.g3f132b7071-goog
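
To see the intended effect of the deferral in this patch, a small
user-space model that only counts invalidation calls is shown below.
It is a sketch of the idea, not of KVM's actual code paths; the
512-page figure is an arbitrary example and the counter stands in for
the real TLBI/DSB sequences.

/*
 * User-space model: per-PTE maintenance issues one invalidation per
 * valid PTE, while deferred maintenance clears PTEs during the walk
 * and issues a single range-based invalidation at the end.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned long tlbi_ops;

static void unmap_walk(unsigned long pages, bool defer_tlb_flush)
{
        for (unsigned long i = 0; i < pages; i++) {
                /* The PTE would be cleared here in either mode. */
                if (!defer_tlb_flush)
                        tlbi_ops++;     /* per-PTE invalidation */
        }

        if (defer_tlb_flush)
                tlbi_ops++;             /* one range invalidation for the whole walk */
}

int main(void)
{
        tlbi_ops = 0;
        unmap_walk(512, false);
        printf("per-PTE maintenance:  %lu invalidation calls\n", tlbi_ops);

        tlbi_ops = 0;
        unmap_walk(512, true);
        printf("deferred maintenance: %lu invalidation call(s)\n", tlbi_ops);
        return 0;
}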