From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:34 +0000
Subject: [PATCH v2 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
Message-ID: <20230206172340.2639971-2-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>
To: Oliver Upton, Marc Zyngier, Ricardo Koller, Reiji Watanabe,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon
Cc: Paolo Bonzini, Catalin Marinas, Jing Zhang, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the upcoming
patches, the KVM code reuses this core algorithm with ipas2e1is for
range-based TLB invalidations based on the IPA.
Hence, extract the core flush functionality of __flush_tlb_range()
into its own macro that accepts an 'op' argument to pass any TLBI
operation, such that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/tlbflush.h | 107 +++++++++++++++---------------
 1 file changed, 54 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..9a57eae14e576 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,60 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride, asid, tlb_level, tlbi_user) do {	\
+	int num = 0;								\
+	int scale = 0;								\
+	unsigned long addr;							\
+										\
+	while (pages > 0) {							\
+		if (!system_supports_tlb_range() ||				\
+		    pages % 2 == 1) {						\
+			addr = __TLBI_VADDR(start, asid);			\
+			__tlbi_level(op, addr, tlb_level);			\
+			if (tlbi_user)						\
+				__tlbi_user_level(op, addr, tlb_level);		\
+			start += stride;					\
+			pages -= stride >> PAGE_SHIFT;				\
+			continue;						\
+		}								\
+										\
+		num = __TLBI_RANGE_NUM(pages, scale);				\
+		if (num >= 0) {							\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,		\
+						  num, tlb_level);		\
+			__tlbi(r##op, addr);					\
+			if (tlbi_user)						\
+				__tlbi_user(r##op, addr);			\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;	\
+			pages -= __TLBI_RANGE_PAGES(num, scale);		\
+		}								\
+		scale++;							\
+	}									\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +353,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
 
-- 
2.39.1.519.gcb327c4b5f-goog
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:35 +0000
Subject: [PATCH v2 2/7] KVM: arm64: Add FEAT_TLBIRANGE support
Message-ID: <20230206172340.2639971-3-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

Define a generic macro __kvm_tlb_flush_range() to invalidate the TLBs
over a range of addresses. It accepts 'op' as a generic TLBI operation,
and upcoming patches will use it to implement IPA-based TLB
invalidations (ipas2e1is). If the system doesn't support FEAT_TLBIRANGE,
the implementation falls back to flushing the pages one by one for the
supplied range.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..995ff048e8851 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -221,6 +221,24 @@ DECLARE_KVM_NVHE_SYM(__per_cpu_end);
 DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
 #define __bp_harden_hyp_vecs	CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
 
+#define __kvm_tlb_flush_range(op, mmu, start, end, level, tlb_level) do {	\
+	unsigned long pages, stride;						\
+										\
+	stride = kvm_granule_size(level);					\
+	start = round_down(start, stride);					\
+	end = round_up(end, stride);						\
+	pages = (end - start) >> PAGE_SHIFT;					\
+										\
+	if ((!system_supports_tlb_range() &&					\
+	     (end - start) >= (MAX_TLBI_OPS * stride)) ||			\
+	    pages >= MAX_TLBI_RANGE_PAGES) {					\
+		__kvm_tlb_flush_vmid(mmu);					\
+		break;								\
+	}									\
+										\
+	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false);	\
+} while (0)
+
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
-- 
2.39.1.519.gcb327c4b5f-goog
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:36 +0000
Subject: [PATCH v2 3/7] KVM: arm64: Implement __kvm_tlb_flush_range_vmid_ipa()
Message-ID: <20230206172340.2639971-4-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

Define __kvm_tlb_flush_range_vmid_ipa() (for VHE and nVHE) to flush a
range of stage-2 page-table entries by IPA in one go. If the system
supports FEAT_TLBIRANGE, the following patches can conveniently replace
global TLBI operations such as vmalls12e1is in the map, unmap, and
dirty-logging paths with ripas2e1is.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 12 ++++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 28 ++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 24 ++++++++++++++++++++++++
 4 files changed, 67 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 995ff048e8851..80a8ea85e84f8 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_range_vmid_ipa,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
@@ -243,6 +244,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
+extern void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t start,
+					   phys_addr_t end, int level, int tlb_level);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..5787eee4c9fe4 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,17 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void handle___kvm_tlb_flush_range_vmid_ipa(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+	DECLARE_REG(phys_addr_t, end, host_ctxt, 3);
+	DECLARE_REG(int, level, host_ctxt, 4);
+	DECLARE_REG(int, tlb_level, host_ctxt, 5);
+
+	__kvm_tlb_flush_range_vmid_ipa(kern_hyp_va(mmu), start, end, level, tlb_level);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -315,6 +326,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_range_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f5896..7398dd00445e7 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -109,6 +109,34 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t start,
+				    phys_addr_t end, int level, int tlb_level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__kvm_tlb_flush_range(ipas2e1is, mmu, start, end, level, tlb_level);
+
+	/*
+	 * Range-based ipas2e1is flushes only Stage-2 entries, and since the
+	 * VA isn't available for Stage-1 entries, flush the entire stage-1.
+	 */
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..e9c1d69f7ddf7 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,30 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t start,
+				    phys_addr_t end, int level, int tlb_level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__kvm_tlb_flush_range(ipas2e1is, mmu, start, end, level, tlb_level);
+
+	/*
+	 * Range-based ipas2e1is flushes only Stage-2 entries, and since the
+	 * VA isn't available for Stage-1 entries, flush the entire stage-1.
+	 */
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
-- 
2.39.1.519.gcb327c4b5f-goog
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:37 +0000
Subject: [PATCH v2 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
Message-ID: <20230206172340.2639971-5-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

Implement kvm_arch_flush_remote_tlbs_range() for arm64, such that it
can utilize the range-based TLBI instructions if supported.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dee530d75b957..211fab0c1de74 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e98910a8d0af6..409cb187f4911 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+	phys_addr_t start, end;
+
+	if (!system_supports_tlb_range())
+		return -EOPNOTSUPP;
+
+	start = start_gfn << PAGE_SHIFT;
+	end = (start_gfn + pages) << PAGE_SHIFT;
+
+	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
+		     start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);
+	return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
 	return !pfn_is_map_memory(pfn);
-- 
2.39.1.519.gcb327c4b5f-goog
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:38 +0000
Subject: [PATCH v2 5/7] KVM: arm64: Flush only the memslot after write-protect
Message-ID: <20230206172340.2639971-6-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

After write-protecting the region, KVM currently invalidates all TLB
entries using kvm_flush_remote_tlbs(). Instead, scope the invalidation
to the targeted memslot alone. If supported, the architecture will use
the range-based TLBI instructions to flush the memslot; otherwise, it
falls back to flushing all of the TLBs.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 409cb187f4911..3e33af0daf1d3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -981,7 +981,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	write_lock(&kvm->mmu_lock);
 	stage2_wp_range(&kvm->arch.mmu, start, end);
 	write_unlock(&kvm->mmu_lock);
-	kvm_flush_remote_tlbs(kvm);
+	kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 /**
-- 
2.39.1.519.gcb327c4b5f-goog
iaVBiXApoWChsBVG+b6ZNyvbAfbeiKBN9u8AXLWxmyVllftR4WlPwwovRShcouDAwv/Y nO9eHoBL6tOog9VaS7KtVRKW1ZIZSspm4oiic72yxChlIsLTGUo7X8+UbVSEgwWiW77h nJHjFwiQJZCRAb1AA1Z5dHfFui2O2vsWrGPgJUAEEpRsoPLNefaDZ5xzuND0Tp33Os5W gTkg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=HD8UYRMgXgvHWUmdjVGh+ElciOrAtTAu4jFhw/x/rRQ=; b=F5Hg2Tjr1AY35KjxAs5XgRh+flf63AMMhELYnOvet2z5au+r4SOxnmk2FtWIXVLMya Zyn3uMmNb0OTW+yCu8IxoRH3WmA5ZDCbkpMW853Vvjgwj/+0gmU+c13C6oW9/ycoKqZC mbiZxobfk+kJVlly30UVy9FEb+cXwltBm6s8RpDQmpftcmg+Mw7XCfHyOU9YM/Bx8+ks TzSOQ1b9YWtrRF9vrs22z0C5DbMDzQUmX6lFr+YGYc38GNm+bVsTFL5JKqHEVw6o3+po qXbdBvpsArZFnTNY4675hm+fizs7WvupT8NBI9ZrqvmMjn71hmoe7CcjWU9fXbmVU9Eg A/ig== X-Gm-Message-State: AO0yUKW3tBsrVBS0HGdfuUL8vrrWmzVusWKYSH7tUu3Jknl9iW4J91tY QcwgAze3RDWTkuagzgNd14/msdN4zfSK X-Google-Smtp-Source: AK7set8+L6PzdEluzAA7EliIwZ6FnESXn+a+MIRyEkM0BR9u+V5qGwhqAUgQH+Hx7Flqnt0qmwXo/Xhbh14F X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6638:4c:b0:3ae:b0c1:72fe with SMTP id a12-20020a056638004c00b003aeb0c172femr51327jap.2.1675704233397; Mon, 06 Feb 2023 09:23:53 -0800 (PST) Date: Mon, 6 Feb 2023 17:23:39 +0000 In-Reply-To: <20230206172340.2639971-1-rananta@google.com> Mime-Version: 1.0 References: <20230206172340.2639971-1-rananta@google.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog Message-ID: <20230206172340.2639971-7-rananta@google.com> Subject: [PATCH v2 6/7] KVM: arm64: Break the table entries using TLBI range instructions From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , Ricardo Koller , Reiji Watanabe , James Morse , Alexandru Elisei , Suzuki K Poulose , Will Deacon Cc: Paolo Bonzini , Catalin Marinas , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, 
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, when breaking up stage-2 table entries, KVM flushes the
entire VM's context using the 'vmalls12e1is' TLBI operation. One
problematic situation is collapsing table entries into a hugepage,
specifically when the VM is faulting on many hugepages (say, after
dirty-logging). This creates a performance penalty for the guest:
pages that were faulted in earlier would have to refill their TLB
entries all over again.

Hence, if the system supports it, use __kvm_tlb_flush_range_vmid_ipa()
to flush only the range of pages governed by the table entry, while
leaving the other TLB entries alone. An upcoming patch also takes
advantage of this when breaking up table entries during the unmap
operation.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11cf2c618a6c..0858d1fa85d6b 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -686,6 +686,20 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_
 	return cmpxchg(ctx->ptep, ctx->old, new) == ctx->old;
 }

+static void kvm_pgtable_stage2_flush_range(struct kvm_s2_mmu *mmu, u64 start, u64 end,
+					   u32 level, u32 tlb_level)
+{
+	if (system_supports_tlb_range())
+		kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, mmu, start, end, level, tlb_level);
+	else
+		/*
+		 * Invalidate the whole stage-2, as we may have numerous leaf
+		 * entries below us which would otherwise need invalidating
+		 * individually.
+		 */
+		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+}
+
 /**
  * stage2_try_break_pte() - Invalidates a pte according to the
  *			    'break-before-make' requirements of the
@@ -721,10 +735,13 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	 * Perform the appropriate TLB invalidation based on the evicted pte
 	 * value (if any).
 	 */
-	if (kvm_pte_table(ctx->old, ctx->level))
-		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
-	else if (kvm_pte_valid(ctx->old))
+	if (kvm_pte_table(ctx->old, ctx->level)) {
+		u64 end = ctx->addr + kvm_granule_size(ctx->level);
+
+		kvm_pgtable_stage2_flush_range(mmu, ctx->addr, end, ctx->level, 0);
+	} else if (kvm_pte_valid(ctx->old)) {
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+	}

 	if (stage2_pte_is_counted(ctx->old))
 		mm_ops->put_page(ctx->ptep);
--
2.39.1.519.gcb327c4b5f-goog
Date: Mon, 6 Feb 2023 17:23:40 +0000
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>
Message-ID: <20230206172340.2639971-8-rananta@google.com>
Subject: [PATCH v2 7/7] KVM: arm64: Create a fast stage-2 unmap path
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, Ricardo Koller, Reiji Watanabe, James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon
Cc: Paolo Bonzini, Catalin Marinas, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
entire page-table to clear and flush the TLBs for each entry. This
could be very expensive, especially if the VM is not backed by
hugepages. The unmap operation can be made more efficient by
disconnecting the table at the very top (the level at which the
largest block mapping can be hosted) and doing the rest of the
unmapping using free_removed_table(). If the system supports
FEAT_TLBIRANGE, flush the entire range that has been disconnected
from the rest of the page-table.

Suggested-by: Ricardo Koller
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 44 ++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0858d1fa85d6b..af3729d0971f2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1017,6 +1017,49 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	return 0;
 }

+/*
+ * The fast walker executes only if the unmap size is exactly equal to the
+ * largest block mapping supported (i.e. at KVM_PGTABLE_MIN_BLOCK_LEVEL),
+ * such that the underneath hierarchy at KVM_PGTABLE_MIN_BLOCK_LEVEL can
+ * be disconnected from the rest of the page-table without the need to
+ * traverse all the PTEs, at all the levels, and unmap each and every one
+ * of them. The disconnected table is freed using free_removed_table().
+ */
+static int fast_stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
+				    enum kvm_pgtable_walk_flags visit)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+	kvm_pte_t *childp = kvm_pte_follow(ctx->old, mm_ops);
+	struct kvm_s2_mmu *mmu = ctx->arg;
+
+	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_MIN_BLOCK_LEVEL)
+		return 0;
+
+	if (!stage2_try_break_pte(ctx, mmu))
+		return -EAGAIN;
+
+	/*
+	 * Gain back a reference for stage2_unmap_walker() to free
+	 * this table entry from KVM_PGTABLE_MIN_BLOCK_LEVEL - 1.
+	 */
+	mm_ops->get_page(ctx->ptep);
+
+	mm_ops->free_removed_table(childp, ctx->level);
+	return 0;
+}
+
+static void kvm_pgtable_try_fast_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fast_stage2_unmap_walker,
+		.arg	= pgt->mmu,
+		.flags	= KVM_PGTABLE_WALK_TABLE_PRE,
+	};
+
+	if (size == kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL))
+		kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	struct kvm_pgtable_walker walker = {
@@ -1025,6 +1068,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};

+	kvm_pgtable_try_fast_stage2_unmap(pgt, addr, size);
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }

--
2.39.1.519.gcb327c4b5f-goog