From nobody Sun Feb 8 12:36:41 2026
Date: Tue, 6 Jun 2023 19:28:52 +0000
Message-ID: <20230606192858.3600174-2-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Subject: [PATCH v5 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Catalin Marinas
Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code will reuse this core algorithm with
ipas2e1is for range-based TLB invalidations based on the IPA. Hence,
extract the core flush functionality of __flush_tlb_range() into its
own macro that accepts an 'op' argument to pass any TLBI operation,
so that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 53 deletions(-)
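[Note for reviewers, not part of the commit message: the decomposition
performed by the new __flush_tlb_range_op() loop can be illustrated with
a small stand-alone user-space sketch. This is not kernel code; the two
helpers below re-derive the arithmetic of the existing
__TLBI_RANGE_NUM() and __TLBI_RANGE_PAGES() macros, and a stride of one
page is assumed.

#include <stdio.h>

/* Pages covered by one range op: (num + 1) * 2^(5*scale + 1). */
static unsigned long tlbi_range_pages(long num, int scale)
{
	return (unsigned long)(num + 1) << (5 * scale + 1);
}

/* 'num' field for a range op at this scale; -1 means no op fits.
 * The 5-bit mask caps the returned value at 30, never 31. */
static long tlbi_range_num(unsigned long pages, int scale)
{
	return (long)((pages >> (5 * scale + 1)) & 0x1f) - 1;
}

int main(void)
{
	unsigned long pages = 1541;	/* arbitrary example page count */
	int scale = 0;
	long num;

	while (pages > 0) {
		/* Odd remainder: model the single-page fallback. */
		if (pages % 2 == 1) {
			printf("non-range op: 1 page\n");
			pages -= 1;
			continue;
		}

		num = tlbi_range_num(pages, scale);
		if (num >= 0) {
			printf("range op: scale=%d num=%ld -> %lu pages\n",
			       scale, num, tlbi_range_pages(num, scale));
			pages -= tlbi_range_pages(num, scale);
		}
		scale++;
	}
	return 0;
}

For pages = 1541 this prints one non-range op, then range ops covering
4 pages (scale=0, num=1) and 1536 pages (scale=1, num=23), i.e.
1 + 4 + 1536 = 1541. Running it with pages = 64 shows the "favour the
latter" note in action: scale 0 is skipped (64 pages would need
num = 31, which the helper never returns) and a single num = 0,
scale = 1 op is emitted instead. A later caller in this series might
look something like __flush_tlb_range_op(ipas2e1is, start, pages,
stride, 0, tlb_level, false) -- illustrative only; the real call sites
appear in the following patches.]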
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..4775378b6da1b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user) do {	\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
 
-- 
2.41.0.rc0.172.g3f132b7071-goog