From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev,
    Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
    Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand,
    "David S. Miller", "H. Peter Anvin", Ingo Molnar, Jann Horn,
    Juergen Gross,
Howlett" , Lorenzo Stoakes , Madhavan Srinivasan , Michael Ellerman , Michal Hocko , Mike Rapoport , Nicholas Piggin , Peter Zijlstra , Ryan Roberts , Suren Baghdasaryan , Thomas Gleixner , Vlastimil Babka , Will Deacon , Yeoreum Yun , linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org, x86@kernel.org Subject: [PATCH v3 04/13] sparc/mm: implement arch_flush_lazy_mmu_mode() Date: Wed, 15 Oct 2025 09:27:18 +0100 Message-ID: <20251015082727.2395128-5-kevin.brodsky@arm.com> X-Mailer: git-send-email 2.47.0 In-Reply-To: <20251015082727.2395128-1-kevin.brodsky@arm.com> References: <20251015082727.2395128-1-kevin.brodsky@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1760516905420158500 Content-Type: text/plain; charset="utf-8" Upcoming changes to the lazy_mmu API will cause arch_flush_lazy_mmu_mode() to be called when leaving a nested lazy_mmu section. Move the relevant logic from arch_leave_lazy_mmu_mode() to arch_flush_lazy_mmu_mode() and have the former call the latter. Signed-off-by: Kevin Brodsky --- arch/sparc/include/asm/tlbflush_64.h | 2 +- arch/sparc/mm/tlb.c | 9 ++++++++- 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/= tlbflush_64.h index 8b8cdaa69272..925bb5d7a4e1 100644 --- a/arch/sparc/include/asm/tlbflush_64.h +++ b/arch/sparc/include/asm/tlbflush_64.h @@ -43,8 +43,8 @@ void flush_tlb_kernel_range(unsigned long start, unsigned= long end); =20 void flush_tlb_pending(void); void arch_enter_lazy_mmu_mode(void); +void arch_flush_lazy_mmu_mode(void); void arch_leave_lazy_mmu_mode(void); -#define arch_flush_lazy_mmu_mode() do {} while (0) =20 /* Local cpu only. */ void __flush_tlb_all(void); diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index a35ddcca5e76..7b5dfcdb1243 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -59,12 +59,19 @@ void arch_enter_lazy_mmu_mode(void) tb->active =3D 1; } =20 -void arch_leave_lazy_mmu_mode(void) +void arch_flush_lazy_mmu_mode(void) { struct tlb_batch *tb =3D this_cpu_ptr(&tlb_batch); =20 if (tb->tlb_nr) flush_tlb_pending(); +} + +void arch_leave_lazy_mmu_mode(void) +{ + struct tlb_batch *tb =3D this_cpu_ptr(&tlb_batch); + + arch_flush_lazy_mmu_mode(); tb->active =3D 0; preempt_enable(); } --=20 2.47.0