From: Samuel Holland <samuel@sholland.org>
To: Palmer Dabbelt, Alexandre Ghiti, linux-riscv@lists.infradead.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Samuel Holland
Subject: [PATCH 6/7] riscv: mm: Always flush a single MM context by ASID
Date: Sat, 9 Sep 2023 15:16:34 -0500
Message-ID: <20230909201727.10909-7-samuel@sholland.org>
In-Reply-To: <20230909201727.10909-1-samuel@sholland.org>
References: <20230909201727.10909-1-samuel@sholland.org>

Even if ASIDs are not supported, using the single-ASID variant of the
sfence.vma instruction preserves TLB entries for global (kernel) pages.
So it is always most efficient to use the single-ASID code path.

Signed-off-by: Samuel Holland <samuel@sholland.org>
---
 arch/riscv/include/asm/mmu_context.h |  2 -
 arch/riscv/include/asm/tlbflush.h    | 11 +++--
 arch/riscv/mm/context.c              |  3 +-
 arch/riscv/mm/tlbflush.c             | 68 ++++++----------------------
 4 files changed, 24 insertions(+), 60 deletions(-)

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index 7030837adc1a..b0659413a080 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -33,8 +33,6 @@ static inline int init_new_context(struct task_struct *tsk,
 	return 0;
 }
 
-DECLARE_STATIC_KEY_FALSE(use_asid_allocator);
-
 #include <asm-generic/mmu_context.h>
 
 #endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index e55831edfc19..ba27cf68b170 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -54,13 +54,18 @@ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 #define flush_tlb_all() local_flush_tlb_all()
 #define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
 
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	unsigned long asid = cntx2asid(atomic_long_read(&mm->context.id));
+
+	local_flush_tlb_all_asid(asid);
+}
+
 static inline void flush_tlb_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end)
 {
-	local_flush_tlb_all();
+	flush_tlb_mm(vma->vm_mm);
 }
-
-#define flush_tlb_mm(mm) flush_tlb_all()
 #endif /* !CONFIG_SMP || !CONFIG_MMU */
 
 /* Flush a range of kernel pages */
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 3ca9b653df7d..20057085ab8a 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -18,8 +18,7 @@
 
 #ifdef CONFIG_MMU
 
-DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
-
+static DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
 static unsigned long num_asids;
 
 static atomic_long_t current_version;
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 54c3e70ccd81..56c2d40681a2 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -6,15 +6,6 @@
 #include <asm/sbi.h>
 #include <asm/mmu_context.h>
 
-static inline void local_flush_tlb_range(unsigned long start,
-			unsigned long size, unsigned long stride)
-{
-	if (size <= stride)
-		local_flush_tlb_page(start);
-	else
-		local_flush_tlb_all();
-}
-
 static inline void local_flush_tlb_range_asid(unsigned long start,
 			unsigned long size, unsigned long stride, unsigned long asid)
 {
@@ -51,62 +42,33 @@ static void __ipi_flush_tlb_range_asid(void *info)
 	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
 }
 
-static void __ipi_flush_tlb_range(void *info)
-{
-	struct flush_tlb_range_data *d = info;
-
-	local_flush_tlb_range(d->start, d->size, d->stride);
-}
-
 static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 			      unsigned long size, unsigned long stride)
 {
+	unsigned long asid = cntx2asid(atomic_long_read(&mm->context.id));
 	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
-	bool broadcast;
 
 	if (cpumask_empty(cmask))
 		return;
 
 	cpuid = get_cpu();
 	/* check if the tlbflush needs to be sent to other CPUs */
-	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
-	if (static_branch_unlikely(&use_asid_allocator)) {
-		unsigned long asid = cntx2asid(atomic_long_read(&mm->context.id));
-
-		if (broadcast) {
-			if (riscv_use_ipi_for_rfence()) {
-				ftd.asid = asid;
-				ftd.start = start;
-				ftd.size = size;
-				ftd.stride = stride;
-				on_each_cpu_mask(cmask,
-						 __ipi_flush_tlb_range_asid,
-						 &ftd, 1);
-			} else
-				sbi_remote_sfence_vma_asid(cmask,
-							   start, size, asid);
-		} else {
-			local_flush_tlb_range_asid(start, size, stride, asid);
-		}
-	} else {
-		if (broadcast) {
-			if (riscv_use_ipi_for_rfence()) {
-				ftd.asid = 0;
-				ftd.start = start;
-				ftd.size = size;
-				ftd.stride = stride;
-				on_each_cpu_mask(cmask,
-						 __ipi_flush_tlb_range,
-						 &ftd, 1);
-			} else
-				sbi_remote_sfence_vma(cmask, start, size);
-		} else {
-			local_flush_tlb_range(start, size, stride);
-		}
-	}
-
+	if (cpumask_any_but(cmask, cpuid) < nr_cpu_ids) {
+		if (riscv_use_ipi_for_rfence()) {
+			ftd.asid = asid;
+			ftd.start = start;
+			ftd.size = size;
+			ftd.stride = stride;
+			on_each_cpu_mask(cmask,
+					 __ipi_flush_tlb_range_asid,
+					 &ftd, 1);
+		} else
+			sbi_remote_sfence_vma_asid(cmask,
+						   start, size, asid);
+	} else
+		local_flush_tlb_range_asid(start, size, stride, asid);
 	put_cpu();
 }
 
-- 
2.41.0
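
For context, not part of the patch: the single-ASID helpers relied on
above come down to sfence.vma with rs2 holding the ASID, which per the
RISC-V privileged spec does not affect entries for global (G-bit)
mappings, so kernel translations survive even when the ASID is always 0.
The following is only a rough sketch of those helpers, and
local_flush_tlb_page_asid() is named here purely for illustration; see
arch/riscv/include/asm/tlbflush.h and arch/riscv/mm/tlbflush.c for the
actual definitions.

/* Flush all non-global TLB entries that match one ASID. */
static inline void local_flush_tlb_all_asid(unsigned long asid)
{
	__asm__ __volatile__ ("sfence.vma x0, %0"
			      : : "r" (asid) : "memory");
}

/* Flush one page's translation for one ASID. */
static inline void local_flush_tlb_page_asid(unsigned long addr,
					     unsigned long asid)
{
	__asm__ __volatile__ ("sfence.vma %0, %1"
			      : : "r" (addr), "r" (asid) : "memory");
}

/* Flush one stride-sized entry, or fall back to a full per-ASID flush. */
static inline void local_flush_tlb_range_asid(unsigned long start,
		unsigned long size, unsigned long stride, unsigned long asid)
{
	if (size <= stride)
		local_flush_tlb_page_asid(start, asid);
	else
		local_flush_tlb_all_asid(asid);
}

With helpers of this shape, the non-ASID path in __flush_tlb_range() is
just the ASID path with asid == 0, which is why the patch can drop it.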