From: Kevin Brodsky
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev, Andreas Larsson, Andrew Morton, Anshuman Khandual, Boris Ostrovsky, Borislav Petkov, Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand, "David S. Miller", David Woodhouse, "H. Peter Anvin", Ingo Molnar, Jann Horn, Juergen Gross, "Liam R. Howlett", Lorenzo Stoakes, Madhavan Srinivasan, Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin, Peter Zijlstra, "Ritesh Harjani (IBM)", Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner, Venkat Rao Bagalkote, Vlastimil Babka, Will Deacon, Yeoreum Yun, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org, x86@kernel.org
Subject: [PATCH v6 03/14] powerpc/mm: implement arch_flush_lazy_mmu_mode()
Date: Mon, 15 Dec 2025 15:03:12 +0000
Message-ID: <20251215150323.2218608-4-kevin.brodsky@arm.com>
In-Reply-To: <20251215150323.2218608-1-kevin.brodsky@arm.com>
References: <20251215150323.2218608-1-kevin.brodsky@arm.com>

Upcoming changes to the lazy_mmu API will cause arch_flush_lazy_mmu_mode()
to be called when leaving a nested lazy_mmu section.

Move the relevant logic from arch_leave_lazy_mmu_mode() to
arch_flush_lazy_mmu_mode() and have the former call the latter. The
radix_enabled() check is required in both, as arch_flush_lazy_mmu_mode()
will be called directly from the generic layer in a subsequent patch.
Note: the additional this_cpu_ptr() and radix_enabled() calls on the
arch_leave_lazy_mmu_mode() path will be removed in a subsequent patch.

Acked-by: David Hildenbrand
Tested-by: Venkat Rao Bagalkote
Signed-off-by: Kevin Brodsky
Reviewed-by: Ritesh Harjani (IBM)
---
 .../powerpc/include/asm/book3s/64/tlbflush-hash.h | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 146287d9580f..2d45f57df169 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -41,7 +41,7 @@ static inline void arch_enter_lazy_mmu_mode(void)
 	batch->active = 1;
 }
 
-static inline void arch_leave_lazy_mmu_mode(void)
+static inline void arch_flush_lazy_mmu_mode(void)
 {
 	struct ppc64_tlb_batch *batch;
 
@@ -51,12 +51,21 @@ static inline void arch_leave_lazy_mmu_mode(void)
 
 	if (batch->index)
 		__flush_tlb_pending(batch);
+}
+
+static inline void arch_leave_lazy_mmu_mode(void)
+{
+	struct ppc64_tlb_batch *batch;
+
+	if (radix_enabled())
+		return;
+	batch = this_cpu_ptr(&ppc64_tlb_batch);
+
+	arch_flush_lazy_mmu_mode();
 	batch->active = 0;
 	preempt_enable();
 }
 
-#define arch_flush_lazy_mmu_mode() do {} while (0)
-
 extern void hash__tlbiel_all(unsigned int action);
 
 extern void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize,
-- 
2.51.2
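
For reference, a sketch of how the two helpers in tlbflush-hash.h would read
with this patch applied. The lines between the two hunks (the radix_enabled()
check and the this_cpu_ptr() lookup at the top of arch_flush_lazy_mmu_mode())
are not quoted in the diff above and are filled in from context, so treat them
as an assumption rather than the literal resulting file:

/* Sketch of the resulting code, not part of the diff above */

static inline void arch_flush_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	/* Batching only applies to the hash MMU, not radix */
	if (radix_enabled())
		return;
	batch = this_cpu_ptr(&ppc64_tlb_batch);

	/* Flush any PTE updates accumulated while in lazy MMU mode */
	if (batch->index)
		__flush_tlb_pending(batch);
}

static inline void arch_leave_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	/* Extra lookup here; removed again by a later patch in the series */
	batch = this_cpu_ptr(&ppc64_tlb_batch);

	/* Flush first, then deactivate the batch and re-enable preemption */
	arch_flush_lazy_mmu_mode();
	batch->active = 0;
	preempt_enable();
}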