From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev,
	Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
	Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand,
	"David S. Miller", David Woodhouse, "H. Peter Anvin", Ingo Molnar,
	Jann Horn, Juergen Gross, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Michael Ellerman, Michal Hocko, Mike Rapoport,
	Nicholas Piggin, Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan,
	Thomas Gleixner, Vlastimil Babka, Will Deacon, Yeoreum Yun,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org,
	x86@kernel.org
Subject: [PATCH v4 10/12] sparc/mm: replace batch->active with in_lazy_mmu_mode()
Date: Wed, 29 Oct 2025 10:09:07 +0000
Message-ID: <20251029100909.3381140-11-kevin.brodsky@arm.com>
In-Reply-To: <20251029100909.3381140-1-kevin.brodsky@arm.com>
References: <20251029100909.3381140-1-kevin.brodsky@arm.com>

A per-CPU batch struct is activated when entering lazy MMU mode; its
lifetime is the same as the lazy MMU section (it is deactivated when
leaving the mode). Preemption is disabled in that interval to ensure
that the per-CPU reference remains valid.

The generic lazy_mmu layer now tracks whether a task is in lazy MMU
mode. We can therefore use the generic helper in_lazy_mmu_mode() to
tell whether a batch struct is active instead of tracking it
explicitly.
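For illustration, the generic tracking can be thought of as a per-task
nesting counter maintained by the enter/leave helpers and queried by
in_lazy_mmu_mode(). A minimal sketch, assuming a hypothetical
lazy_mmu_state field on task_struct (the real helper is introduced
earlier in this series and may differ in names and layout):

	/*
	 * Sketch only: the field and type names below are assumptions,
	 * not the actual implementation from the generic lazy_mmu patch.
	 */
	struct lazy_mmu_state {
		u8 nesting_level;	/* enter increments, leave decrements */
	};

	static inline bool in_lazy_mmu_mode(void)
	{
		/* True iff the current task has an open lazy MMU section. */
		return current->lazy_mmu_state.nesting_level > 0;
	}

Because the state is per-task rather than per-CPU, it remains valid
across the section as long as preemption rules are respected, which is
exactly what the sparc code relies on below.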
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/sparc/include/asm/tlbflush_64.h | 1 -
 arch/sparc/mm/tlb.c                  | 9 +--------
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
index 4e1036728e2f..6133306ba59a 100644
--- a/arch/sparc/include/asm/tlbflush_64.h
+++ b/arch/sparc/include/asm/tlbflush_64.h
@@ -12,7 +12,6 @@ struct tlb_batch {
 	unsigned int hugepage_shift;
 	struct mm_struct *mm;
 	unsigned long tlb_nr;
-	unsigned long active;
 	unsigned long vaddrs[TLB_BATCH_NR];
 };
 
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 7b5dfcdb1243..879e22c86e5c 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -52,11 +52,7 @@ void flush_tlb_pending(void)
 
 void arch_enter_lazy_mmu_mode(void)
 {
-	struct tlb_batch *tb;
-
 	preempt_disable();
-	tb = this_cpu_ptr(&tlb_batch);
-	tb->active = 1;
 }
 
 void arch_flush_lazy_mmu_mode(void)
@@ -69,10 +65,7 @@ void arch_flush_lazy_mmu_mode(void)
 
 void arch_leave_lazy_mmu_mode(void)
 {
-	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
-
 	arch_flush_lazy_mmu_mode();
-	tb->active = 0;
 	preempt_enable();
 }
 
@@ -93,7 +86,7 @@ static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
 		nr = 0;
 	}
 
-	if (!tb->active) {
+	if (!in_lazy_mmu_mode()) {
 		flush_tsb_user_page(mm, vaddr, hugepage_shift);
 		global_flush_tlb_page(mm, vaddr);
 		goto out;
-- 
2.47.0