From nobody Thu Dec 18 18:51:18 2025
Date: Fri, 14 Apr 2023 17:29:22 +0000
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
Mime-Version: 1.0
References: <20230414172922.812640-1-rananta@google.com>
X-Mailer: git-send-email 2.40.0.634.g4ca3ef3211-goog
Message-ID: <20230414172922.812640-8-rananta@google.com>
Subject: [PATCH v3 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The current implementation of the stage-2 unmap walker traverses the
given range and, as part of break-before-make, performs a TLB
invalidation with a DSB for every PTE. Repeating this pair for every
entry over a large range can become a performance bottleneck.

Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
invalidations until the entire walk is finished, and then use
range-based instructions to invalidate the TLBs in one go. Condition
this upon S2FWB in order to avoid walking the page-table again to
perform the CMOs after issuing the TLBI.

Signed-off-by: Raghavendra Rao Ananta
Suggested-by: Oliver Upton
---
 arch/arm64/kvm/hyp/pgtable.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3f136e35feb5e..bcb748e3566c7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -987,10 +987,16 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return ret;
 }
 
+struct stage2_unmap_data {
+	struct kvm_pgtable *pgt;
+	bool skip_pte_tlbis;
+};
+
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 			       enum kvm_pgtable_walk_flags visit)
 {
-	struct kvm_pgtable *pgt = ctx->arg;
+	struct stage2_unmap_data *unmap_data = ctx->arg;
+	struct kvm_pgtable *pgt = unmap_data->pgt;
 	struct kvm_s2_mmu *mmu = pgt->mmu;
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 	kvm_pte_t *childp = NULL;
@@ -1018,7 +1024,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * block entry and rely on the remaining portions being faulted
 	 * back lazily.
 	 */
-	stage2_put_pte(ctx, mmu, mm_ops, false);
+	stage2_put_pte(ctx, mmu, mm_ops, unmap_data->skip_pte_tlbis);
 
 	if (need_flush && mm_ops->dcache_clean_inval_poc)
 		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1032,13 +1038,32 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+	int ret;
+	struct stage2_unmap_data unmap_data = {
+		.pgt = pgt,
+		/*
+		 * If FEAT_TLBIRANGE is implemented, defer the individual PTE
+		 * TLB invalidations until the entire walk is finished, and
+		 * then use the range-based TLBI instructions to do the
+		 * invalidations. Condition this upon S2FWB in order to avoid
+		 * a page-table walk again to perform the CMOs after TLBI.
+		 */
+		.skip_pte_tlbis = system_supports_tlb_range() &&
+					stage2_has_fwb(pgt),
+	};
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_unmap_walker,
-		.arg	= pgt,
+		.arg	= &unmap_data,
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	return kvm_pgtable_walk(pgt, addr, size, &walker);
+	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+	if (unmap_data.skip_pte_tlbis)
+		/* Perform the deferred TLB invalidations */
+		kvm_call_hyp(__kvm_tlb_flush_vmid_range, pgt->mmu,
+				addr, addr + size);
+
+	return ret;
 }
 
 struct stage2_attr_data {
-- 
2.40.0.634.g4ca3ef3211-goog
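
For readers following along outside the kernel tree, below is a minimal
standalone C sketch of the deferral pattern the commit message describes.
It is not kernel code: invalidate_one(), invalidate_range(), and
supports_range_tlbi() are hypothetical stand-ins for the per-PTE
TLBI+DSB, the FEAT_TLBIRANGE path, and the capability check, and the
actual PTE-clearing step is elided.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-in for a per-PTE TLBI + DSB: cheap once, costly in bulk. */
static void invalidate_one(unsigned long va)
{
	printf("TLBI VA    %#lx\n", va);
}

/* Stand-in for a single range-based invalidation of [start, end). */
static void invalidate_range(unsigned long start, unsigned long end)
{
	printf("TLBI RANGE %#lx..%#lx\n", start, end);
}

/* Stand-in for the FEAT_TLBIRANGE (and S2FWB) capability check. */
static bool supports_range_tlbi(void)
{
	return true;
}

static void unmap(unsigned long addr, unsigned long size)
{
	/* Mirrors .skip_pte_tlbis: skip per-PTE TLBIs when they can be deferred. */
	bool defer = supports_range_tlbi();
	unsigned long va;

	for (va = addr; va < addr + size; va += PAGE_SIZE) {
		/* ... clear the PTE here (break-before-make) ... */
		if (!defer)
			invalidate_one(va);
	}

	/* One range-based invalidation covers the whole walk. */
	if (defer)
		invalidate_range(addr, addr + size);
}

int main(void)
{
	unmap(0x80000000UL, 4 * PAGE_SIZE);
	return 0;
}

With four pages unmapped, the deferred path prints one TLBI RANGE line
instead of four TLBI VA lines; on real hardware the win comes from the
elided per-PTE DSB barriers rather than the instruction count alone.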