From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:16 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-2-rananta@google.com>
Subject: [PATCH v3 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range-based TLB invalidations by IPA. Hence, extract
the core flush functionality of __flush_tlb_range() into its own
macro that accepts an 'op' argument to pass any TLBI operation, such
that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..4775378b6da1b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS    PTRS_PER_PTE

+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,                  \
+                                asid, tlb_level, tlbi_user) do {        \
+        int num = 0;                                                    \
+        int scale = 0;                                                  \
+        unsigned long addr;                                             \
+                                                                        \
+        while (pages > 0) {                                             \
+                if (!system_supports_tlb_range() ||                     \
+                    pages % 2 == 1) {                                   \
+                        addr = __TLBI_VADDR(start, asid);               \
+                        __tlbi_level(op, addr, tlb_level);              \
+                        if (tlbi_user)                                  \
+                                __tlbi_user_level(op, addr, tlb_level); \
+                        start += stride;                                \
+                        pages -= stride >> PAGE_SHIFT;                  \
+                        continue;                                       \
+                }                                                       \
+                                                                        \
+                num = __TLBI_RANGE_NUM(pages, scale);                   \
+                if (num >= 0) {                                         \
+                        addr = __TLBI_VADDR_RANGE(start, asid, scale,   \
+                                                  num, tlb_level);      \
+                        __tlbi(r##op, addr);                            \
+                        if (tlbi_user)                                  \
+                                __tlbi_user(r##op, addr);               \
+                        start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+                        pages -= __TLBI_RANGE_PAGES(num, scale);        \
+                }                                                       \
+                scale++;                                                \
+        }                                                               \
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                      unsigned long start, unsigned long end,
                                      unsigned long stride, bool last_level,
                                      int tlb_level)
 {
-        int num = 0;
-        int scale = 0;
-        unsigned long asid, addr, pages;
+        unsigned long asid, pages;

         start = round_down(start, stride);
         end = round_up(end, stride);
@@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
         dsb(ishst);
         asid = ASID(vma->vm_mm);

-        /*
-         * When the CPU does not support TLB range operations, flush the TLB
-         * entries one by one at the granularity of 'stride'. If the TLB
-         * range ops are supported, then:
-         *
-         * 1. If 'pages' is odd, flush the first page through non-range
-         *    operations;
-         *
-         * 2. For remaining pages: the minimum range granularity is decided
-         *    by 'scale', so multiple range TLBI operations may be required.
-         *    Start from scale = 0, flush the corresponding number of pages
-         *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-         *    until no pages left.
-         *
-         * Note that certain ranges can be represented by either num = 31 and
-         * scale or num = 0 and scale + 1. The loop below favours the latter
-         * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-         */
-        while (pages > 0) {
-                if (!system_supports_tlb_range() ||
-                    pages % 2 == 1) {
-                        addr = __TLBI_VADDR(start, asid);
-                        if (last_level) {
-                                __tlbi_level(vale1is, addr, tlb_level);
-                                __tlbi_user_level(vale1is, addr, tlb_level);
-                        } else {
-                                __tlbi_level(vae1is, addr, tlb_level);
-                                __tlbi_user_level(vae1is, addr, tlb_level);
-                        }
-                        start += stride;
-                        pages -= stride >> PAGE_SHIFT;
-                        continue;
-                }
-
-                num = __TLBI_RANGE_NUM(pages, scale);
-                if (num >= 0) {
-                        addr = __TLBI_VADDR_RANGE(start, asid, scale,
-                                                  num, tlb_level);
-                        if (last_level) {
-                                __tlbi(rvale1is, addr);
-                                __tlbi_user(rvale1is, addr);
-                        } else {
-                                __tlbi(rvae1is, addr);
-                                __tlbi_user(rvae1is, addr);
-                        }
-                        start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-                        pages -= __TLBI_RANGE_PAGES(num, scale);
-                }
-                scale++;
-        }
+        if (last_level)
+                __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+        else
+                __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
         dsb(ish);
 }

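For reference, the way the macro carves a page count into TLBI operations
can be modelled in user space. The snippet below is an illustration only,
not part of the patch: it assumes a stride of one 4KiB page, relies on the
relation from the comment above (one range operation at a given 'scale'
covers (num + 1) * 2^(5*scale + 1) pages, with num capped at 30), and the
helper name is made up for the example.

    #include <stdio.h>

    /* user-space model of the __flush_tlb_range_op() decomposition */
    static void model_range_decomposition(unsigned long pages)
    {
            int scale = 0;

            while (pages > 0) {
                    if (pages % 2 == 1) {
                            /* odd leftover: one non-range TLBI covering one page */
                            printf("  non-range op: 1 page\n");
                            pages -= 1;
                            continue;
                    }

                    /* largest usable 'num' (<= 30) at this scale, -1 if none */
                    long num = (long)((pages >> (5 * scale + 1)) & 0x1f) - 1;

                    if (num >= 0) {
                            unsigned long span = (unsigned long)(num + 1) << (5 * scale + 1);

                            printf("  range op: scale=%d num=%ld -> %lu pages\n",
                                   scale, num, span);
                            pages -= span;
                    }
                    scale++;
            }
    }

    int main(void)
    {
            /* e.g. a 2MiB run of 4KiB pages: a single range op at scale=1, num=7 */
            model_range_decomposition(512);
            return 0;
    }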
-- 
2.40.0.634.g4ca3ef3211-goog

From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:17 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-3-rananta@google.com>
Subject: [PATCH v3 2/7] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a
range of stage-2 page-tables using IPA in one go. If the system
supports FEAT_TLBIRANGE, the following patches can then conveniently
replace global TLBIs such as vmalls12e1is in the map, unmap, and
dirty-logging paths with ripas2e1is instead.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 39 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 35 +++++++++++++++++++++++++++
 4 files changed, 88 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..33352d9399e32 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,7 @@ enum __kvm_host_smccc_func {
         __KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
         __KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
         __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+        __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
 };

 #define DECLARE_KVM_VHE_SYM(sym)        extern char sym[]
@@ -225,6 +226,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
                                      int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                       phys_addr_t start, phys_addr_t end);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);

 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..81d30737dc7c9 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,16 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
         __kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }

+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+        DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+        DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+        DECLARE_REG(phys_addr_t, end, host_ctxt, 3);
+
+        __kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, end);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
         DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -315,6 +325,7 @@ static const hcall_t host_hcall[] = {
         HANDLE_FUNC(__kvm_vcpu_run),
         HANDLE_FUNC(__kvm_flush_vm_context),
         HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+        HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
         HANDLE_FUNC(__kvm_tlb_flush_vmid),
         HANDLE_FUNC(__kvm_flush_cpu_context),
         HANDLE_FUNC(__kvm_timer_set_cntvoff),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f5896..d2504df9d38b6 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -109,6 +109,45 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
         __tlb_switch_to_host(&cxt);
 }

+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                phys_addr_t start, phys_addr_t end)
+{
+        struct tlb_inv_context cxt;
+        unsigned long pages, stride;
+
+        /*
+         * Since the range of addresses may not be mapped at
+         * the same level, assume the worst case as PAGE_SIZE
+         */
+        stride = PAGE_SIZE;
+        start = round_down(start, stride);
+        end = round_up(end, stride);
+        pages = (end - start) >> PAGE_SHIFT;
+
+        if (!system_supports_tlb_range() || pages >= MAX_TLBI_RANGE_PAGES) {
+                __kvm_tlb_flush_vmid(mmu);
+                return;
+        }
+
+        dsb(ishst);
+
+        /* Switch to requested VMID */
+        __tlb_switch_to_guest(mmu, &cxt);
+
+        __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+        dsb(ish);
+        __tlbi(vmalle1is);
+        dsb(ish);
+        isb();
+
+        /* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+        if (icache_is_vpipt())
+                icache_inval_all_pou();
+
+        __tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
         struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..f34d6dd9e4674 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,41 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
         __tlb_switch_to_host(&cxt);
 }

+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                                phys_addr_t start, phys_addr_t end)
+{
+        struct tlb_inv_context cxt;
+        unsigned long pages, stride;
+
+        /*
+         * Since the range of addresses may not be mapped at
+         * the same level, assume the worst case as PAGE_SIZE
+         */
+        stride = PAGE_SIZE;
+        start = round_down(start, stride);
+        end = round_up(end, stride);
+        pages = (end - start) >> PAGE_SHIFT;
+
+        if (!system_supports_tlb_range() || pages >= MAX_TLBI_RANGE_PAGES) {
+                __kvm_tlb_flush_vmid(mmu);
+                return;
+        }
+
+        dsb(ishst);
+
+        /* Switch to requested VMID */
+        __tlb_switch_to_guest(mmu, &cxt);
+
+        __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+        dsb(ish);
+        __tlbi(vmalle1is);
+        dsb(ish);
+        isb();
+
+        __tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
         struct tlb_inv_context cxt;
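As a rough orientation, not part of the patch: the point at which this
helper gives up on range-based invalidation and falls back to a full VMID
flush follows from the range encoding. The stand-alone snippet below
assumes MAX_TLBI_RANGE_PAGES equals __TLBI_RANGE_PAGES(31, 3) and 4KiB
pages; the macro name here is invented for the illustration.

    #include <stdio.h>

    /* assumed to mirror __TLBI_RANGE_PAGES(num, scale) */
    #define EXAMPLE_RANGE_PAGES(num, scale) \
            ((unsigned long)((num) + 1) << (5 * (scale) + 1))

    int main(void)
    {
            unsigned long max_pages = EXAMPLE_RANGE_PAGES(31, 3);

            /* 32 << 16 = 2097152 pages of 4KiB, i.e. 8 GiB */
            printf("fallback threshold: %lu pages (%lu GiB)\n",
                   max_pages, (max_pages << 12) >> 30);
            return 0;
    }

Under those assumptions, only ranges of 8GiB or more take the full-VMID
fallback; everything smaller is handled with ipas2e1is range operations.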
-- 
2.40.0.634.g4ca3ef3211-goog

From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:18 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-4-rananta@google.com>
Subject: [PATCH v3 3/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate
the given range in the TLB.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/mmu.c              | 11 +++++++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 17c215a2df7d7..075d3e6482e53 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1044,6 +1044,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);

+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
         return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d0a0d3dca9316..e3673b4c10292 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -92,6 +92,17 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
         return 0;
 }

+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+        phys_addr_t start, end;
+
+        start = start_gfn << PAGE_SHIFT;
+        end = (start_gfn + pages) << PAGE_SHIFT;
+
+        kvm_call_hyp(__kvm_tlb_flush_vmid_range, &kvm->arch.mmu, start, end);
+        return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
         return !pfn_is_map_memory(pfn);
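For illustration only, not part of the patch: the conversion this helper
performs from a gfn-based range to the byte-addressed IPA range consumed by
__kvm_tlb_flush_vmid_range(), assuming PAGE_SHIFT == 12; the values below
are hypothetical.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            const unsigned int page_shift = 12;  /* assumed PAGE_SHIFT */
            uint64_t start_gfn = 0x80000;        /* hypothetical guest frame number */
            uint64_t pages = 512;

            /* same shifts as kvm_arch_flush_remote_tlbs_range() */
            uint64_t start = start_gfn << page_shift;
            uint64_t end = (start_gfn + pages) << page_shift;

            printf("flush IPA range [0x%llx, 0x%llx)\n",
                   (unsigned long long)start, (unsigned long long)end);
            return 0;
    }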
-- 
2.40.0.634.g4ca3ef3211-goog

From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:19 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-5-rananta@google.com>
Subject: [PATCH v3 4/7] KVM: arm64: Flush only the memslot after write-protect
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

After write-protecting the region, KVM currently invalidates all TLB
entries using kvm_flush_remote_tlbs(). Instead, scope the invalidation
to the targeted memslot. If supported, the architecture uses the
range-based TLBI instructions to flush the memslot; otherwise it falls
back to flushing all of the TLBs.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e3673b4c10292..2ea6eb4ea763e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -996,7 +996,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
         write_lock(&kvm->mmu_lock);
         stage2_wp_range(&kvm->arch.mmu, start, end);
         write_unlock(&kvm->mmu_lock);
-        kvm_flush_remote_tlbs(kvm);
+        kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }

 /**
-- 
2.40.0.634.g4ca3ef3211-goog
From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:20 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-6-rananta@google.com>
Subject: [PATCH v3 5/7] KVM: arm64: Invalidate the table entries upon a range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, during operations such as a hugepage collapse, KVM flushes
the entire VM's context with the 'vmalls12e1is' TLBI operation. If the
VM is faulting on many hugepages (say, after dirty-logging), this
penalizes the guest: pages that were already faulted in earlier have to
refill their TLB entries again.

Instead, call __kvm_tlb_flush_vmid_range() for table entries. If the
system supports it, only the required range will be flushed; otherwise
it falls back to the previous mechanism.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d2..b8f0dbd12f773 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -745,10 +745,13 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
          * Perform the appropriate TLB invalidation based on the evicted pte
          * value (if any).
          */
-        if (kvm_pte_table(ctx->old, ctx->level))
-                kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
-        else if (kvm_pte_valid(ctx->old))
+        if (kvm_pte_table(ctx->old, ctx->level)) {
+                u64 end = ctx->addr + kvm_granule_size(ctx->level);
+
+                kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, ctx->addr, end);
+        } else if (kvm_pte_valid(ctx->old)) {
                 kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+        }

         if (stage2_pte_is_counted(ctx->old))
                 mm_ops->put_page(ctx->ptep);
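For illustration only, not part of the patch: the range handed to
__kvm_tlb_flush_vmid_range() when a table entry is evicted spans exactly
one granule at the entry's level. The helper below mirrors what
kvm_granule_size() is assumed to return with 4KiB pages (level 1 -> 1GiB,
level 2 -> 2MiB, level 3 -> 4KiB); the names and the address are
hypothetical.

    #include <stdio.h>

    /* assumed granule size per level with 4KiB pages: each level resolves 9 bits */
    static unsigned long long granule_size_4k(int level)
    {
            return 1ULL << ((4 - level) * 9 + 3);
    }

    int main(void)
    {
            unsigned long long addr = 0x40000000ULL; /* hypothetical IPA of the collapsed table */
            int level = 2;                           /* 2MiB granule with 4KiB pages */

            printf("collapse at level %d: flush [0x%llx, 0x%llx)\n",
                   level, addr, addr + granule_size_4k(level));
            return 0;
    }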
-- 
2.40.0.634.g4ca3ef3211-goog

From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:21 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-7-rananta@google.com>
Subject: [PATCH v3 6/7] KVM: arm64: Add 'skip_flush' arg to stage2_put_pte()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add a 'skip_flush' argument to stage2_put_pte() to control the TLB
invalidation. The upcoming patch will use it to defer the individual
PTE invalidations until the entire walk is finished.

No functional change intended.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b8f0dbd12f773..3f136e35feb5e 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -772,7 +772,7 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
 }

 static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
-                           struct kvm_pgtable_mm_ops *mm_ops)
+                           struct kvm_pgtable_mm_ops *mm_ops, bool skip_flush)
 {
         /*
          * Clear the existing PTE, and perform break-before-make with
@@ -780,7 +780,10 @@ static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s
          */
         if (kvm_pte_valid(ctx->old)) {
                 kvm_clear_pte(ctx->ptep);
-                kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+
+                if (!skip_flush)
+                        kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+                                     ctx->addr, ctx->level);
         }

         mm_ops->put_page(ctx->ptep);
@@ -1015,7 +1018,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
          * block entry and rely on the remaining portions being faulted
          * back lazily.
          */
-        stage2_put_pte(ctx, mmu, mm_ops);
+        stage2_put_pte(ctx, mmu, mm_ops, false);

         if (need_flush && mm_ops->dcache_clean_inval_poc)
                 mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
-- 
2.40.0.634.g4ca3ef3211-goog
From nobody Thu Dec 18 07:58:38 2025
Date: Fri, 14 Apr 2023 17:29:22 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
Message-ID: <20230414172922.812640-8-rananta@google.com>
Subject: [PATCH v3 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current stage-2 unmap walker traverses the given range and, as
part of break-before-make, performs a TLB invalidation with a DSB for
every PTE. Repeating that combination over a large range can become a
performance bottleneck.

Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
invalidations until the entire walk is finished, and then use
range-based instructions to invalidate the TLBs in one go. Condition
this upon S2FWB in order to avoid walking the page-table again to
perform the CMOs after issuing the TLBI.

Signed-off-by: Raghavendra Rao Ananta
Suggested-by: Oliver Upton
---
 arch/arm64/kvm/hyp/pgtable.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3f136e35feb5e..bcb748e3566c7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -987,10 +987,16 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
         return ret;
 }

+struct stage2_unmap_data {
+        struct kvm_pgtable *pgt;
+        bool skip_pte_tlbis;
+};
+
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
                                enum kvm_pgtable_walk_flags visit)
 {
-        struct kvm_pgtable *pgt = ctx->arg;
+        struct stage2_unmap_data *unmap_data = ctx->arg;
+        struct kvm_pgtable *pgt = unmap_data->pgt;
         struct kvm_s2_mmu *mmu = pgt->mmu;
         struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
         kvm_pte_t *childp = NULL;
@@ -1018,7 +1024,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
          * block entry and rely on the remaining portions being faulted
          * back lazily.
          */
-        stage2_put_pte(ctx, mmu, mm_ops);
+        stage2_put_pte(ctx, mmu, mm_ops, unmap_data->skip_pte_tlbis);

         if (need_flush && mm_ops->dcache_clean_inval_poc)
                 mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1032,13 +1038,32 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,

 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+        int ret;
+        struct stage2_unmap_data unmap_data = {
+                .pgt = pgt,
+                /*
+                 * If FEAT_TLBIRANGE is implemented, defer the individual PTE
+                 * TLB invalidations until the entire walk is finished, and
+                 * then use the range-based TLBI instructions to do the
+                 * invalidations. Condition this upon S2FWB in order to avoid
+                 * a page-table walk again to perform the CMOs after TLBI.
+                 */
+                .skip_pte_tlbis = system_supports_tlb_range() &&
+                                        stage2_has_fwb(pgt),
+        };
         struct kvm_pgtable_walker walker = {
                 .cb     = stage2_unmap_walker,
-                .arg    = pgt,
+                .arg    = &unmap_data,
                 .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
         };

-        return kvm_pgtable_walk(pgt, addr, size, &walker);
+        ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+        if (unmap_data.skip_pte_tlbis)
+                /* Perform the deferred TLB invalidations */
+                kvm_call_hyp(__kvm_tlb_flush_vmid_range, pgt->mmu,
+                                addr, addr + size);
+
+        return ret;
 }

 struct stage2_attr_data {
-- 
2.40.0.634.g4ca3ef3211-goog
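To summarise the behavioural change, here is a stand-alone model, not part
of the patch, of the decision the unmap path now makes; have_tlbirange and
have_s2fwb stand in for system_supports_tlb_range() and stage2_has_fwb(),
and the printed costs are purely illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    static void model_unmap(bool have_tlbirange, bool have_s2fwb, unsigned long nptes)
    {
            /* deferral requires both FEAT_TLBIRANGE and FEAT_S2FWB */
            bool defer = have_tlbirange && have_s2fwb;

            if (!defer)
                    printf("per-PTE flushing: %lu TLBI+DSB pairs during the walk\n", nptes);
            else
                    printf("deferred flushing: no TLBIs during the walk, one range TLBI afterwards\n");
    }

    int main(void)
    {
            model_unmap(false, false, 512);  /* behaviour without the features */
            model_unmap(true, true, 512);    /* FEAT_TLBIRANGE + FEAT_S2FWB present */
            return 0;
    }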