From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev,
	Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
	Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand,
	"David S. Miller", "H. Peter Anvin", Ingo Molnar, Jann Horn,
	Juergen Gross, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Michael Ellerman, Michal Hocko, Mike Rapoport,
	Nicholas Piggin, Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan,
	Thomas Gleixner, Vlastimil Babka, Will Deacon, Yeoreum Yun,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH v2 7/7] mm: update lazy_mmu documentation
Date: Mon, 8 Sep 2025 08:39:31 +0100
Message-ID: <20250908073931.4159362-8-kevin.brodsky@arm.com>
In-Reply-To: <20250908073931.4159362-1-kevin.brodsky@arm.com>
References: <20250908073931.4159362-1-kevin.brodsky@arm.com>

We now support nested lazy_mmu sections on all architectures
implementing the API. Update the API comment accordingly.

Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Kevin Brodsky
Reviewed-by: Yeoreum Yun
---
 include/linux/pgtable.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index df0eb898b3fc..85cd1fdb914f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -228,8 +228,18 @@ static inline int pmd_dirty(pmd_t pmd)
  * of the lazy mode. So the implementation must assume preemption may be enabled
  * and cpu migration is possible; it must take steps to be robust against this.
  * (In practice, for user PTE updates, the appropriate page table lock(s) are
- * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
- * and the mode cannot be used in interrupt context.
+ * held, but for kernel PTE updates, no lock is held). The mode cannot be used
+ * in interrupt context.
+ *
+ * Calls may be nested: an arch_{enter,leave}_lazy_mmu_mode() pair may be called
+ * while the lazy MMU mode has already been enabled. An implementation should
+ * handle this using the state returned by enter() and taken by the matching
+ * leave() call; the LAZY_MMU_{DEFAULT,NESTED} flags can be used to indicate
+ * whether this enter/leave pair is nested inside another or not. (It is up to
+ * the implementation to track whether the lazy MMU mode is enabled at any point
+ * in time.) The expectation is that leave() will flush any batched state
+ * unconditionally, but only leave the lazy MMU mode if the passed state is not
+ * LAZY_MMU_NESTED.
  */
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 typedef int lazy_mmu_state_t;
-- 
2.47.0
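
To illustrate the contract documented above, here is a minimal sketch of
how an architecture might implement the nested enter()/leave() pair. The
per-CPU flag lazy_mmu_active and the arch_flush_lazy_mmu_batch() helper
are hypothetical placeholders for whatever tracking and batching an
implementation actually uses; only lazy_mmu_state_t and the
LAZY_MMU_{DEFAULT,NESTED} flags come from the API being documented:

static DEFINE_PER_CPU(bool, lazy_mmu_active);

static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
{
	/* Nested call: an outer section has already enabled the mode. */
	if (this_cpu_read(lazy_mmu_active))
		return LAZY_MMU_NESTED;

	this_cpu_write(lazy_mmu_active, true);
	return LAZY_MMU_DEFAULT;
}

static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
{
	/* Flush any batched state unconditionally... */
	arch_flush_lazy_mmu_batch();

	/* ...but only leave the mode at the outermost leave() call. */
	if (state != LAZY_MMU_NESTED)
		this_cpu_write(lazy_mmu_active, false);
}

A caller then threads the state from enter() through to the matching
leave(), e.g.:

	lazy_mmu_state_t state = arch_enter_lazy_mmu_mode();
	/* ... batched PTE updates ... */
	arch_leave_lazy_mmu_mode(state);

(A real implementation would additionally need to be robust against
preemption and CPU migration, as the updated comment requires.)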