From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev,
	Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
	Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand,
	"David S. Miller", David Woodhouse, "H. Peter Anvin", Ingo Molnar,
	Jann Horn, Juergen Gross, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Michael Ellerman, Michal Hocko, Mike Rapoport,
	Nicholas Piggin, Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan,
	Thomas Gleixner, Vlastimil Babka, Will Deacon, Yeoreum Yun,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org,
	x86@kernel.org
Subject: [PATCH v4 12/12] mm: bail out of lazy_mmu_mode_* in interrupt context
Date: Wed, 29 Oct 2025 10:09:09 +0000
Message-ID: <20251029100909.3381140-13-kevin.brodsky@arm.com>
In-Reply-To: <20251029100909.3381140-1-kevin.brodsky@arm.com>
References: <20251029100909.3381140-1-kevin.brodsky@arm.com>

The lazy MMU mode cannot be used in interrupt context. This is
documented in <linux/pgtable.h>, but isn't consistently handled across
architectures.

arm64 ensures that calls to lazy_mmu_mode_* have no effect in interrupt
context, because such calls do occur in certain configurations - see
commit b81c688426a9 ("arm64/mm: Disable barrier batching in interrupt
contexts"). Other architectures do not check for this situation, most
likely because it hasn't occurred so far.
Both arm64 and x86/Xen also ensure that any lazy MMU optimisation is
disabled while in interrupt context (see queue_pte_barriers() and
xen_get_lazy_mode() respectively).

Let's handle this in the new generic lazy_mmu layer, in the same
fashion as arm64: bail out of lazy_mmu_mode_* if in_interrupt(), and
have in_lazy_mmu_mode() return false to disable any optimisation.

Also remove the arm64 handling that is now redundant; x86/Xen has its
own internal tracking, so it is left unchanged.

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 17 +----------------
 include/linux/pgtable.h          | 16 ++++++++++++++--
 include/linux/sched.h            |  3 +++
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 61ca88f94551..96987a49e83b 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -62,37 +62,22 @@ static inline void emit_pte_barriers(void)
 
 static inline void queue_pte_barriers(void)
 {
-	if (in_interrupt()) {
-		emit_pte_barriers();
-		return;
-	}
-
 	if (in_lazy_mmu_mode())
 		test_and_set_thread_flag(TIF_LAZY_MMU_PENDING);
 	else
 		emit_pte_barriers();
 }
 
-static inline void arch_enter_lazy_mmu_mode(void)
-{
-	if (in_interrupt())
-		return;
-}
+static inline void arch_enter_lazy_mmu_mode(void) {}
 
 static inline void arch_flush_lazy_mmu_mode(void)
 {
-	if (in_interrupt())
-		return;
-
 	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
 		emit_pte_barriers();
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
 {
-	if (in_interrupt())
-		return;
-
 	arch_flush_lazy_mmu_mode();
 }
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e6064e00b22d..e6069ce4ec83 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -228,8 +228,8 @@ static inline int pmd_dirty(pmd_t pmd)
  * of the lazy mode. So the implementation must assume preemption may be enabled
  * and cpu migration is possible; it must take steps to be robust against this.
  * (In practice, for user PTE updates, the appropriate page table lock(s) are
- * held, but for kernel PTE updates, no lock is held). The mode cannot be used
- * in interrupt context.
+ * held, but for kernel PTE updates, no lock is held). The mode is disabled
+ * in interrupt context and calls to the lazy_mmu API have no effect.
  *
  * The lazy MMU mode is enabled for a given block of code using:
  *
@@ -265,6 +265,9 @@ static inline void lazy_mmu_mode_enable(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON_ONCE(state->nesting_level == U8_MAX);
 	/* enable() must not be called while paused */
 	VM_WARN_ON(state->nesting_level > 0 && !state->active);
@@ -279,6 +282,9 @@ static inline void lazy_mmu_mode_disable(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON_ONCE(state->nesting_level == 0);
 	VM_WARN_ON(!state->active);
 
@@ -295,6 +301,9 @@ static inline void lazy_mmu_mode_pause(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON(state->nesting_level == 0 || !state->active);
 
 	state->active = false;
@@ -305,6 +314,9 @@ static inline void lazy_mmu_mode_resume(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON(state->nesting_level == 0 || state->active);
 
 	state->active = true;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 11566d973f42..bb873016ffcf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1731,6 +1731,9 @@ static inline char task_state_to_char(struct task_struct *tsk)
 #ifdef CONFIG_ARCH_HAS_LAZY_MMU_MODE
 static inline bool in_lazy_mmu_mode(void)
 {
+	if (in_interrupt())
+		return false;
+
 	return current->lazy_mmu_state.active;
 }
 #else
-- 
2.47.0
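
[ Editor's illustration, not part of the patch: a minimal, hypothetical
  caller sketch showing the intended effect of the generic bail-out.
  The helper name and its use of set_pte() are made up for the example;
  only lazy_mmu_mode_enable()/lazy_mmu_mode_disable() and
  in_lazy_mmu_mode() come from this series. In interrupt context the
  enable/disable calls below return immediately and in_lazy_mmu_mode()
  stays false, so the arch sees no batching window and the caller needs
  no in_interrupt() check of its own. ]

/*
 * Hypothetical helper that may be reached from both task and interrupt
 * context. After this patch, the lazy_mmu calls below are no-ops under
 * in_interrupt(), so no explicit context check is needed here.
 */
static void set_kernel_ptes(pte_t *ptep, const pte_t *entries, unsigned int nr)
{
	unsigned int i;

	lazy_mmu_mode_enable();		/* no-op in interrupt context */

	for (i = 0; i < nr; i++) {
		/*
		 * Arch code (e.g. arm64's queue_pte_barriers()) may defer
		 * work only while in_lazy_mmu_mode() is true, which is
		 * never the case in interrupt context after this patch.
		 */
		set_pte(ptep + i, entries[i]);
	}

	lazy_mmu_mode_disable();	/* no-op in interrupt context */
}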