From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Alexander Gordeev, Andreas Larsson,
 Andrew Morton, Boris Ostrovsky, Borislav Petkov, Catalin Marinas,
 Christophe Leroy, Dave Hansen, David Hildenbrand, "David S. Miller",
 "H. Peter Anvin", Ingo Molnar, Jann Horn, Juergen Gross,
 "Liam R. Howlett", Lorenzo Stoakes, Madhavan Srinivasan,
 Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
 Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner,
 Vlastimil Babka, Will Deacon, Yeoreum Yun,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org,
 x86@kernel.org
Subject: [PATCH v3 12/13] mm: bail out of lazy_mmu_mode_* in interrupt context
Date: Wed, 15 Oct 2025 09:27:26 +0100
Message-ID: <20251015082727.2395128-13-kevin.brodsky@arm.com>
In-Reply-To: <20251015082727.2395128-1-kevin.brodsky@arm.com>
References: <20251015082727.2395128-1-kevin.brodsky@arm.com>

The lazy MMU mode cannot be used in interrupt context. This is
documented in <linux/pgtable.h>, but isn't consistently handled across
architectures.
arm64 ensures that calls to lazy_mmu_mode_* have no effect in interrupt
context, because such calls do occur in certain configurations - see
commit b81c688426a9 ("arm64/mm: Disable barrier batching in interrupt
contexts"). Other architectures do not check this situation, most
likely because it hasn't occurred so far.

Both arm64 and x86/Xen also ensure that any lazy MMU optimisation is
disabled while in interrupt mode (see queue_pte_barriers() and
xen_get_lazy_mode() respectively).

Let's handle this in the new generic lazy_mmu layer, in the same
fashion as arm64: bail out of lazy_mmu_mode_* if in_interrupt(), and
have in_lazy_mmu_mode() return false to disable any optimisation.

Also remove the arm64 handling that is now redundant; x86/Xen has its
own internal tracking so it is left unchanged.

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 17 +----------------
 include/linux/pgtable.h          | 16 ++++++++++++++--
 include/linux/sched.h            |  3 +++
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 944e512767db..a37f417c30be 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -62,37 +62,22 @@ static inline void emit_pte_barriers(void)
 
 static inline void queue_pte_barriers(void)
 {
-	if (in_interrupt()) {
-		emit_pte_barriers();
-		return;
-	}
-
 	if (in_lazy_mmu_mode())
 		test_and_set_thread_flag(TIF_LAZY_MMU_PENDING);
 	else
 		emit_pte_barriers();
 }
 
-static inline void arch_enter_lazy_mmu_mode(void)
-{
-	if (in_interrupt())
-		return;
-}
+static inline void arch_enter_lazy_mmu_mode(void) {}
 
 static inline void arch_flush_lazy_mmu_mode(void)
 {
-	if (in_interrupt())
-		return;
-
 	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
 		emit_pte_barriers();
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
 {
-	if (in_interrupt())
-		return;
-
 	arch_flush_lazy_mmu_mode();
 }
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 269225a733de..718c9c788114 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -228,8 +228,8 @@ static inline int pmd_dirty(pmd_t pmd)
  * of the lazy mode. So the implementation must assume preemption may be enabled
  * and cpu migration is possible; it must take steps to be robust against this.
  * (In practice, for user PTE updates, the appropriate page table lock(s) are
- * held, but for kernel PTE updates, no lock is held). The mode cannot be used
- * in interrupt context.
+ * held, but for kernel PTE updates, no lock is held). The mode is disabled
+ * in interrupt context and calls to the lazy_mmu API have no effect.
  *
  * The lazy MMU mode is enabled for a given block of code using:
  *
@@ -265,6 +265,9 @@ static inline void lazy_mmu_mode_enable(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_BUG_ON(state->count == U8_MAX);
 	/* enable() must not be called while paused */
 	VM_WARN_ON(state->count > 0 && !state->enabled);
@@ -280,6 +283,9 @@ static inline void lazy_mmu_mode_disable(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_BUG_ON(state->count == 0);
 	VM_WARN_ON(!state->enabled);
 
@@ -297,6 +303,9 @@ static inline void lazy_mmu_mode_pause(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON(state->count == 0 || !state->enabled);
 
 	state->enabled = false;
@@ -307,6 +316,9 @@ static inline void lazy_mmu_mode_resume(void)
 {
 	struct lazy_mmu_state *state = &current->lazy_mmu_state;
 
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON(state->count == 0 || state->enabled);
 
 	arch_enter_lazy_mmu_mode();
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2862d8bf2160..beb3e6cfddd9 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1731,6 +1731,9 @@ static inline char task_state_to_char(struct task_struct *tsk)
 #ifdef CONFIG_ARCH_LAZY_MMU
 static inline bool in_lazy_mmu_mode(void)
 {
+	if (in_interrupt())
+		return false;
+
 	return current->lazy_mmu_state.enabled;
 }
 #else
-- 
2.47.0