From nobody Sun Apr 5 19:41:47 2026
From: Baolin Wang
To: akpm@linux-foundation.org, david@kernel.org
Cc: catalin.marinas@arm.com, will@kernel.org, lorenzo.stoakes@oracle.com, ryan.roberts@arm.com, Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com, riel@surriel.com, harry.yoo@oracle.com, jannh@google.com, willy@infradead.org, baohua@kernel.org, dev.jain@arm.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, hannes@cmpxchg.org, zhengqi.arch@bytedance.com, shakeel.butt@linux.dev, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/6] mm: use inline helper functions instead of ugly macros
Date: Fri, 6 Mar 2026 14:43:37 +0800

People have already complained that these *_clear_young_notify() related
macros are very ugly, so let's use inline helpers to make them more
readable.

In addition, we cannot implement these inline helper functions in the
mmu_notifier.h file, because some arch-specific files include
mmu_notifier.h, which introduces header compilation dependencies and
causes build errors (e.g., arch/arm64/include/asm/tlbflush.h). Moreover,
since these functions are only used in mm, implementing these inline
helpers in the mm/internal.h header seems reasonable.
Reviewed-by: Rik van Riel
Reviewed-by: Barry Song
Acked-by: David Hildenbrand (Arm)
Signed-off-by: Baolin Wang
---
 include/linux/mmu_notifier.h | 54 ------------------------------------
 mm/internal.h                | 52 ++++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+), 54 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 8450e18a87c2..3705d350c863 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -516,55 +516,6 @@ static inline void mmu_notifier_range_init_owner(
 	range->owner = owner;
 }
 
-#define clear_flush_young_ptes_notify(__vma, __address, __ptep, __nr)	\
-({									\
-	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
-	unsigned long ___address = __address;				\
-	unsigned int ___nr = __nr;					\
-	__young = clear_flush_young_ptes(___vma, ___address, __ptep, ___nr); \
-	__young |= mmu_notifier_clear_flush_young(___vma->vm_mm,	\
-						  ___address,		\
-						  ___address +		\
-						  ___nr * PAGE_SIZE);	\
-	__young;							\
-})
-
-#define pmdp_clear_flush_young_notify(__vma, __address, __pmdp)		\
-({									\
-	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
-	unsigned long ___address = __address;				\
-	__young = pmdp_clear_flush_young(___vma, ___address, __pmdp);	\
-	__young |= mmu_notifier_clear_flush_young(___vma->vm_mm,	\
-						  ___address,		\
-						  ___address +		\
-						  PMD_SIZE);		\
-	__young;							\
-})
-
-#define ptep_clear_young_notify(__vma, __address, __ptep)		\
-({									\
-	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
-	unsigned long ___address = __address;				\
-	__young = ptep_test_and_clear_young(___vma, ___address, __ptep);\
-	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
-					    ___address + PAGE_SIZE);	\
-	__young;							\
-})
-
-#define pmdp_clear_young_notify(__vma, __address, __pmdp)		\
-({									\
-	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
-	unsigned long ___address = __address;				\
-	__young = pmdp_test_and_clear_young(___vma, ___address, __pmdp);\
-	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
-					    ___address + PMD_SIZE);	\
-	__young;							\
-})
-
 #else /* CONFIG_MMU_NOTIFIER */
 
 struct mmu_notifier_range {
@@ -652,11 +603,6 @@ static inline void mmu_notifier_subscriptions_destroy(struct mm_struct *mm)
 
 #define mmu_notifier_range_update_to_read_only(r) false
 
-#define clear_flush_young_ptes_notify clear_flush_young_ptes
-#define pmdp_clear_flush_young_notify pmdp_clear_flush_young
-#define ptep_clear_young_notify ptep_test_and_clear_young
-#define pmdp_clear_young_notify pmdp_test_and_clear_young
-
 static inline void mmu_notifier_synchronize(void)
 {
 }
diff --git a/mm/internal.h b/mm/internal.h
index 0d5208101762..05eb0303f277 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1796,4 +1797,55 @@ static inline int io_remap_pfn_range_complete(struct vm_area_struct *vma,
 	return remap_pfn_range_complete(vma, addr, pfn, size, prot);
 }
 
+#ifdef CONFIG_MMU_NOTIFIER
+static inline int clear_flush_young_ptes_notify(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
+{
+	int young;
+
+	young = clear_flush_young_ptes(vma, addr, ptep, nr);
+	young |= mmu_notifier_clear_flush_young(vma->vm_mm, addr,
+						addr + nr * PAGE_SIZE);
+	return young;
+}
+
+static inline int pmdp_clear_flush_young_notify(struct vm_area_struct *vma,
+		unsigned long addr, pmd_t *pmdp)
+{
+	int young;
+
+	young = pmdp_clear_flush_young(vma, addr, pmdp);
+	young |= mmu_notifier_clear_flush_young(vma->vm_mm, addr, addr + PMD_SIZE);
+	return young;
+}
+
+static inline int ptep_clear_young_notify(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep)
+{
+	int young;
+
+	young = ptep_test_and_clear_young(vma, addr, ptep);
+	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
+	return young;
+}
+
+static inline int pmdp_clear_young_notify(struct vm_area_struct *vma,
+		unsigned long addr, pmd_t *pmdp)
+{
+	int young;
+
+	young = pmdp_test_and_clear_young(vma, addr, pmdp);
+	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PMD_SIZE);
+	return young;
+}
+
+#else /* CONFIG_MMU_NOTIFIER */
+
+#define clear_flush_young_ptes_notify clear_flush_young_ptes
+#define pmdp_clear_flush_young_notify pmdp_clear_flush_young
+#define ptep_clear_young_notify ptep_test_and_clear_young
+#define pmdp_clear_young_notify pmdp_test_and_clear_young
+
+#endif /* CONFIG_MMU_NOTIFIER */
+
 #endif /* __MM_INTERNAL_H */
-- 
2.47.3
From nobody Sun Apr 5 19:41:47 2026
From: Baolin Wang
To: akpm@linux-foundation.org, david@kernel.org
Subject: [PATCH v3 2/6] mm: rename ptep/pmdp_clear_young_notify() to ptep/pmdp_test_and_clear_young_notify()
Date: Fri, 6 Mar 2026 14:43:38 +0800

Rename ptep/pmdp_clear_young_notify() to
ptep/pmdp_test_and_clear_young_notify() to make the function names
consistent.

Acked-by: David Hildenbrand (Arm)
Suggested-by: David Hildenbrand (Arm)
Signed-off-by: Baolin Wang
---
 mm/internal.h | 8 ++++----
 mm/vmscan.c   | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 05eb0303f277..f45f97df0d28 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1819,7 +1819,7 @@ static inline int pmdp_clear_flush_young_notify(struct vm_area_struct *vma,
 	return young;
 }
 
-static inline int ptep_clear_young_notify(struct vm_area_struct *vma,
+static inline int ptep_test_and_clear_young_notify(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep)
 {
 	int young;
@@ -1829,7 +1829,7 @@ static inline int ptep_clear_young_notify(struct vm_area_struct *vma,
 	return young;
 }
 
-static inline int pmdp_clear_young_notify(struct vm_area_struct *vma,
+static inline int pmdp_test_and_clear_young_notify(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmdp)
 {
 	int young;
@@ -1843,8 +1843,8 @@ static inline int pmdp_clear_young_notify(struct vm_area_struct *vma,
 
 #define clear_flush_young_ptes_notify clear_flush_young_ptes
 #define pmdp_clear_flush_young_notify pmdp_clear_flush_young
-#define ptep_clear_young_notify ptep_test_and_clear_young
-#define pmdp_clear_young_notify pmdp_test_and_clear_young
+#define ptep_test_and_clear_young_notify ptep_test_and_clear_young
+#define pmdp_test_and_clear_young_notify pmdp_test_and_clear_young
 
 #endif /* CONFIG_MMU_NOTIFIER */
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 52adb37d1b01..e3425b4db755 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3504,7 +3504,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		if (!folio)
 			continue;
 
-		if (!ptep_clear_young_notify(args->vma, addr, pte + i))
+		if (!ptep_test_and_clear_young_notify(args->vma, addr, pte + i))
 			continue;
 
 		if (last != folio) {
@@ -3595,7 +3595,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 		if (!folio)
 			goto next;
 
-		if (!pmdp_clear_young_notify(vma, addr, pmd + i))
+		if (!pmdp_test_and_clear_young_notify(vma, addr, pmd + i))
 			goto next;
 
 		if (last != folio) {
@@ -4185,7 +4185,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	lockdep_assert_held(pvmw->ptl);
 	VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
 
-	if (!ptep_clear_young_notify(vma, addr, pte))
+	if (!ptep_test_and_clear_young_notify(vma, addr, pte))
 		return false;
 
 	if (spin_is_contended(pvmw->ptl))
@@ -4237,7 +4237,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 		if (!folio)
 			continue;
 
-		if (!ptep_clear_young_notify(vma, addr, pte + i))
+		if (!ptep_test_and_clear_young_notify(vma, addr, pte + i))
 			continue;
 
 		if (last != folio) {
-- 
2.47.3
From nobody Sun Apr 5 19:41:47 2026
From: Baolin Wang
To: akpm@linux-foundation.org, david@kernel.org
Subject: [PATCH v3 3/6] mm: rmap: add a ZONE_DEVICE folio warning in folio_referenced()
Date: Fri, 6 Mar 2026 14:43:39 +0800
Message-ID: <64d6fb2a33f7101e1d4aca2c9052e0758b76d492.1772778858.git.baolin.wang@linux.alibaba.com>

folio_referenced() is used to test whether a folio was referenced during
reclaim. ZONE_DEVICE folios are controlled by their device driver, have a
lifetime tied to that driver, and are never placed on the LRU list. That
means we should never try to reclaim ZONE_DEVICE folios, so add a warning
in folio_referenced() to catch this unexpected behavior and avoid
confusion, as discussed in the previous thread[1].

[1] https://lore.kernel.org/all/16fb7985-ec0f-4b56-91e7-404c5114f899@kernel.org/

Reviewed-by: Alistair Popple
Acked-by: David Hildenbrand (Arm)
Signed-off-by: Baolin Wang
---
 mm/rmap.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 603186ff4ba5..2d94b3ba52da 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1065,6 +1065,7 @@ int folio_referenced(struct folio *folio, int is_locked,
 		.invalid_vma = invalid_folio_referenced_vma,
 	};
 
+	VM_WARN_ON_ONCE_FOLIO(folio_is_zone_device(folio), folio);
 	*vm_flags = 0;
 	if (!pra.mapcount)
 		return 0;
-- 
2.47.3
From nobody Sun Apr 5 19:41:47 2026
From: Baolin Wang
To: akpm@linux-foundation.org, david@kernel.org
Subject: [PATCH v3 4/6] mm: add a batched helper to clear the young flag for large folios
Date: Fri, 6 Mar 2026 14:43:40 +0800
Message-ID: <23ec671bfcc06cd24ee0fbff8e329402742274a0.1772778858.git.baolin.wang@linux.alibaba.com>

Currently, MGLRU calls ptep_test_and_clear_young_notify() to check and
clear the young flag for each PTE sequentially, which is inefficient when
reclaiming large folios.

Moreover, on the Arm64 architecture, which supports contiguous PTEs, the
Arm64-specific ptep_test_and_clear_young() already implements an
optimization to clear the young flags for PTEs within a contiguous range.
However, this is not sufficient. Similar to the Arm64-specific
clear_flush_young_ptes(), we can extend this to perform batched operations
for the entire large folio (which might exceed the contiguous range:
CONT_PTE_SIZE).

Thus, introduce a new batched helper, test_and_clear_young_ptes(), and its
wrapper test_and_clear_young_ptes_notify(), consistent with the existing
functions, to perform batched checking of the young flags for large
folios. This helps improve performance during large folio reclamation when
MGLRU is enabled. The helper will be overridden by architectures that
implement a more efficient batch operation in the following patches.
Signed-off-by: Baolin Wang
---
 include/linux/pgtable.h | 37 +++++++++++++++++++++++++++++++++
 mm/internal.h           | 16 +++++++++++-----
 2 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index d2767a4c027b..17d961c612fc 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1103,6 +1103,43 @@ static inline int clear_flush_young_ptes(struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef test_and_clear_young_ptes
+/**
+ * test_and_clear_young_ptes - Mark PTEs that map consecutive pages of the same
+ *			       folio as old
+ * @vma: The virtual memory area the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to clear access bit.
+ *
+ * May be overridden by the architecture; otherwise, implemented as a simple
+ * loop over ptep_test_and_clear_young().
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ *
+ * Returns: whether any PTE was young.
+ */
+static inline int test_and_clear_young_ptes(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
+{
+	int young = 0;
+
+	for (;;) {
+		young |= ptep_test_and_clear_young(vma, addr, ptep);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+
+	return young;
+}
+#endif
+
 /*
  * On some architectures hardware does not set page access bit when accessing
  * memory page, it is responsibility of software setting this bit. It brings
diff --git a/mm/internal.h b/mm/internal.h
index f45f97df0d28..8cdd5d8e43fb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1819,13 +1819,13 @@ static inline int pmdp_clear_flush_young_notify(struct vm_area_struct *vma,
 	return young;
 }
 
-static inline int ptep_test_and_clear_young_notify(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *ptep)
+static inline int test_and_clear_young_ptes_notify(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	int young;
 
-	young = ptep_test_and_clear_young(vma, addr, ptep);
-	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
+	young = test_and_clear_young_ptes(vma, addr, ptep, nr);
+	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + nr * PAGE_SIZE);
 	return young;
 }
 
@@ -1843,9 +1843,15 @@ static inline int pmdp_test_and_clear_young_notify(struct vm_area_struct *vma,
 
 #define clear_flush_young_ptes_notify clear_flush_young_ptes
 #define pmdp_clear_flush_young_notify pmdp_clear_flush_young
-#define ptep_test_and_clear_young_notify ptep_test_and_clear_young
+#define test_and_clear_young_ptes_notify test_and_clear_young_ptes
 #define pmdp_test_and_clear_young_notify pmdp_test_and_clear_young
 
 #endif /* CONFIG_MMU_NOTIFIER */
 
+static inline int ptep_test_and_clear_young_notify(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep)
+{
+	return test_and_clear_young_ptes_notify(vma, addr, ptep, 1);
+}
+
 #endif /* __MM_INTERNAL_H */
-- 
2.47.3
From nobody Sun Apr 5 19:41:47 2026
From: Baolin Wang
To: akpm@linux-foundation.org, david@kernel.org
Subject: [PATCH v3 5/6] mm: support batched checking of the young flag for MGLRU
Date: Fri, 6 Mar 2026 14:43:41 +0800
Message-ID: <378f4acf7d07410aa7c2e4b49d56bb165918eb34.1772778858.git.baolin.wang@linux.alibaba.com>

Use the batched helper test_and_clear_young_ptes_notify() to check and
clear the young flag, to improve performance during large folio
reclamation when MGLRU is enabled.

Meanwhile, we can also support batched checking of the young and dirty
flags when MGLRU walks the mm's page table to update the folios'
generation counter. Since MGLRU also checks the PTE dirty bit, use
folio_pte_batch_flags() with FPB_MERGE_YOUNG_DIRTY set to detect batches
of PTEs for a large folio. Then we can remove
ptep_test_and_clear_young_notify(), since it has no users now.

Note that we also update the 'young' counter and the
'mm_stats[MM_LEAF_YOUNG]' counter with the batched count in
lru_gen_look_around() and walk_pte_range(). However, the batched
operations may inflate these two counters, because in a large folio not
all PTEs may have been accessed.
(Additionally, tracking how many PTEs have been accessed within a large
folio is not very meaningful, since the mm core actually tracks
access/dirty on a per-folio basis, not per page.)

The impact analysis is as follows:
1. The 'mm_stats[MM_LEAF_YOUNG]' counter has no functional impact and is
   mainly for debugging.
2. The 'young' counter is used by suitable_to_scan() to decide whether to
   place the current PMD entry into the bloom filters (so that next time
   we can check whether it has been accessed again), which may set the
   hash bit in the bloom filters for a PMD entry that hasn't seen much
   access. However, bloom filters inherently allow some error, so this
   effect appears negligible.

Reviewed-by: Rik van Riel
Signed-off-by: Baolin Wang
Acked-by: David Hildenbrand (Arm)
---
 include/linux/mmzone.h |  5 +++--
 mm/internal.h          |  6 ------
 mm/rmap.c              | 28 +++++++++++++--------------
 mm/vmscan.c            | 43 +++++++++++++++++++++++++++++-----------
 4 files changed, 49 insertions(+), 33 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b53b95abe287..7bd0134c241c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -684,7 +684,7 @@ struct lru_gen_memcg {
 
 void lru_gen_init_pgdat(struct pglist_data *pgdat);
 void lru_gen_init_lruvec(struct lruvec *lruvec);
-bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
+bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw, unsigned int nr);
 
 void lru_gen_init_memcg(struct mem_cgroup *memcg);
 void lru_gen_exit_memcg(struct mem_cgroup *memcg);
@@ -706,7 +706,8 @@ static inline void lru_gen_init_lruvec(struct lruvec *lruvec)
 {
 }
 
-static inline bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+static inline bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw,
+				       unsigned int nr)
 {
 	return false;
 }
diff --git a/mm/internal.h b/mm/internal.h
index 8cdd5d8e43fb..95b583e7e4f7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1848,10 +1848,4 @@ static inline int pmdp_test_and_clear_young_notify(struct vm_area_struct *vma,
 
 #endif /* CONFIG_MMU_NOTIFIER */
 
-static inline int ptep_test_and_clear_young_notify(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *ptep)
-{
-	return test_and_clear_young_ptes_notify(vma, addr, ptep, 1);
-}
-
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/rmap.c b/mm/rmap.c
index 2d94b3ba52da..6398d7eef393 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -958,25 +958,20 @@ static bool folio_referenced_one(struct folio *folio,
 			return false;
 		}
 
+		if (pvmw.pte && folio_test_large(folio)) {
+			const unsigned long end_addr = pmd_addr_end(address, vma->vm_end);
+			const unsigned int max_nr = (end_addr - address) >> PAGE_SHIFT;
+			pte_t pteval = ptep_get(pvmw.pte);
+
+			nr = folio_pte_batch(folio, pvmw.pte, pteval, max_nr);
+		}
+
 		if (lru_gen_enabled() && pvmw.pte) {
-			if (lru_gen_look_around(&pvmw))
+			if (lru_gen_look_around(&pvmw, nr))
 				referenced++;
 		} else if (pvmw.pte) {
-			if (folio_test_large(folio)) {
-				unsigned long end_addr = pmd_addr_end(address, vma->vm_end);
-				unsigned int max_nr = (end_addr - address) >> PAGE_SHIFT;
-				pte_t pteval = ptep_get(pvmw.pte);
-
-				nr = folio_pte_batch(folio, pvmw.pte,
-						     pteval, max_nr);
-			}
-
-			ptes += nr;
 			if (clear_flush_young_ptes_notify(vma, address,
							  pvmw.pte, nr))
 				referenced++;
-			/* Skip the batched PTEs */
-			pvmw.pte += nr - 1;
-			pvmw.address += (nr - 1) * PAGE_SIZE;
 		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
 			if (pmdp_clear_flush_young_notify(vma, address,
							  pvmw.pmd))
@@ -986,6 +981,7 @@ static bool folio_referenced_one(struct folio *folio,
 			WARN_ON_ONCE(1);
 		}
 
+		ptes += nr;
 		pra->mapcount -= nr;
 		/*
 		 * If we are sure that we batched the entire folio,
@@ -995,6 +991,10 @@ static bool folio_referenced_one(struct folio *folio,
 			page_vma_mapped_walk_done(&pvmw);
 			break;
 		}
+
+		/* Skip the batched PTEs */
+		pvmw.pte += nr - 1;
+		pvmw.address += (nr - 1) * PAGE_SIZE;
 	}
 
 	if (referenced)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e3425b4db755..33287ba4a500 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3470,6 +3470,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 	struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
 	DEFINE_MAX_SEQ(walk->lruvec);
 	int gen = lru_gen_from_seq(max_seq);
+	unsigned int nr;
 	pmd_t pmdval;
 
 	pte = pte_offset_map_rw_nolock(args->mm, pmd, start & PMD_MASK, &pmdval, &ptl);
@@ -3488,11 +3489,13 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 
 	lazy_mmu_mode_enable();
 restart:
-	for (i = pte_index(start), addr = start; addr != end; i++, addr += PAGE_SIZE) {
+	for (i = pte_index(start), addr = start; addr != end; i += nr, addr += nr * PAGE_SIZE) {
 		unsigned long pfn;
 		struct folio *folio;
-		pte_t ptent = ptep_get(pte + i);
+		pte_t *cur_pte = pte + i;
+		pte_t ptent = ptep_get(cur_pte);
 
+		nr = 1;
 		total++;
 		walk->mm_stats[MM_LEAF_TOTAL]++;
 
@@ -3504,7 +3507,16 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		if (!folio)
 			continue;
 
-		if (!ptep_test_and_clear_young_notify(args->vma, addr, pte + i))
+		if (folio_test_large(folio)) {
+			const unsigned int max_nr = (end - addr) >> PAGE_SHIFT;
+
+			nr = folio_pte_batch_flags(folio, NULL, cur_pte, &ptent,
+						   max_nr, FPB_MERGE_YOUNG_DIRTY);
+			total += nr - 1;
+			walk->mm_stats[MM_LEAF_TOTAL] += nr - 1;
+		}
+
+		if (!test_and_clear_young_ptes_notify(args->vma, addr, cur_pte, nr))
 			continue;
 
 		if (last != folio) {
@@ -3517,8 +3529,8 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		if (pte_dirty(ptent))
 			dirty = true;
 
-		young++;
-		walk->mm_stats[MM_LEAF_YOUNG]++;
+		young += nr;
+		walk->mm_stats[MM_LEAF_YOUNG] += nr;
 	}
 
 	walk_update_folio(walk, last, gen, dirty);
@@ -4162,7 +4174,7 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 	 * the PTE table to the Bloom filter.
This forms a feedback loop between t= he * eviction and the aging. */ -bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw) +bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw, unsigned int n= r) { int i; bool dirty; @@ -4185,7 +4197,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk = *pvmw) lockdep_assert_held(pvmw->ptl); VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio); =20 - if (!ptep_test_and_clear_young_notify(vma, addr, pte)) + if (!test_and_clear_young_ptes_notify(vma, addr, pte, nr)) return false; =20 if (spin_is_contended(pvmw->ptl)) @@ -4225,10 +4237,12 @@ bool lru_gen_look_around(struct page_vma_mapped_wal= k *pvmw) =20 pte -=3D (addr - start) / PAGE_SIZE; =20 - for (i =3D 0, addr =3D start; addr !=3D end; i++, addr +=3D PAGE_SIZE) { + for (i =3D 0, addr =3D start; addr !=3D end; + i +=3D nr, pte +=3D nr, addr +=3D nr * PAGE_SIZE) { unsigned long pfn; - pte_t ptent =3D ptep_get(pte + i); + pte_t ptent =3D ptep_get(pte); =20 + nr =3D 1; pfn =3D get_pte_pfn(ptent, vma, addr, pgdat); if (pfn =3D=3D -1) continue; @@ -4237,7 +4251,14 @@ bool lru_gen_look_around(struct page_vma_mapped_walk= *pvmw) if (!folio) continue; =20 - if (!ptep_test_and_clear_young_notify(vma, addr, pte + i)) + if (folio_test_large(folio)) { + const unsigned int max_nr =3D (end - addr) >> PAGE_SHIFT; + + nr =3D folio_pte_batch_flags(folio, NULL, pte, &ptent, + max_nr, FPB_MERGE_YOUNG_DIRTY); + } + + if (!test_and_clear_young_ptes_notify(vma, addr, pte, nr)) continue; =20 if (last !=3D folio) { @@ -4250,7 +4271,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk = *pvmw) if (pte_dirty(ptent)) dirty =3D true; =20 - young++; + young +=3D nr; } =20 walk_update_folio(walk, last, gen, dirty); --=20 2.47.3 From nobody Sun Apr 5 19:41:47 2026 Received: from out30-130.freemail.mail.aliyun.com (out30-130.freemail.mail.aliyun.com [115.124.30.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
smtp.subspace.kernel.org (Postfix) with ESMTPS id E6209371D02 for ; Fri, 6 Mar 2026 06:44:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.30.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772779458; cv=none; b=pw6Omj+PbBaPuhZEKnbLvB3P39gFjhcvfbQV1I45PaG85NhmYZIvwN9qvbqnPlZQYijjOj5m6EhzeEE8FAHKG8Hr+YxoyInbs3jpgFtmEYGyTHkHJptUz4BvpESdxzbhSAlERLNoGymz2UBQONcB/I+/9xj/d+6/EY4rl0vPx68= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1772779458; c=relaxed/simple; bh=xQSoH8Ux9Ilv+cYOKrx/tHuqnVK+e+Pu2dYDUnQwH7c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=hZfBV82EXeJpVFBvx+YStVW9ZCqDYXBdBc1vcaxlpQ9/ftNhgY8t/VYLKYP1gQPT2wd8ZdlzPleFIKLC/2TGLI2cTRxvKq3r5VObC2o3L/IXzcd9utoLn9tPpmQ29gvicfv2ZErTg1SOuahmaqjY8LxqL30Z0hHJcqgb3l8M8Uk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com; spf=pass smtp.mailfrom=linux.alibaba.com; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b=ldKEC1ku; arc=none smtp.client-ip=115.124.30.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b="ldKEC1ku" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.alibaba.com; s=default; t=1772779449; h=From:To:Subject:Date:Message-ID:MIME-Version; bh=Z89VjCPOsBTAMzdOkBmmkyGLdD2D9prjOEPgmVbdWLg=; b=ldKEC1kuEPefnGMQ0yMTEQcsEED3mzCE7wZNbXSwW1u68p0NJHbxPyti4ErvsJCWWhefAlAODcEbZCNX4j/CFim370d0SPCIFr9n0IUIyL1FOabEkiX00obEKXDTp0QE3TC1+qtXfQbd+KQ1FthHOj4zDeAKmZK5wiHG4IoC0eM= Received: from localhost(mailfrom:baolin.wang@linux.alibaba.com fp:SMTPD_---0X-MLT6U_1772779446 
cluster:ay36) by smtp.aliyun-inc.com; Fri, 06 Mar 2026 14:44:07 +0800 From: Baolin Wang To: akpm@linux-foundation.org, david@kernel.org Cc: catalin.marinas@arm.com, will@kernel.org, lorenzo.stoakes@oracle.com, ryan.roberts@arm.com, Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com, riel@surriel.com, harry.yoo@oracle.com, jannh@google.com, willy@infradead.org, baohua@kernel.org, dev.jain@arm.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, hannes@cmpxchg.org, zhengqi.arch@bytedance.com, shakeel.butt@linux.dev, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 6/6] arm64: mm: implement the architecture-specific test_and_clear_young_ptes() Date: Fri, 6 Mar 2026 14:43:42 +0800 Message-ID: <7f891d42a720cc2e57862f3b79e4f774404f313c.1772778858.git.baolin.wang@linux.alibaba.com> X-Mailer: git-send-email 2.47.3 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Implement the Arm64 architecture-specific test_and_clear_young_ptes() to en= able batched checking of young flags, improving performance during large folio reclamation when MGLRU is enabled. While we're at it, simplify ptep_test_and_clear_young() by calling test_and_clear_young_ptes(). Since callers guarantee that PTEs are present before calling these functions, we can use pte_cont() to check the CONT_PTE flag instead of pte_valid_cont(). Performance testing: Enable MGLRU, then allocate 10G clean file-backed folios by mmap() in a mem= ory cgroup, and try to reclaim 8G file-backed folios via the memory.reclaim int= erface. I can observe 60%+ performance improvement on my Arm64 32-core server (and = about 15% improvement on my X86 machine). 
W/o patchset: real 0m0.470s user 0m0.000s sys 0m0.470s W/ patchset: real 0m0.180s user 0m0.001s sys 0m0.179s Reviewed-by: Rik van Riel Signed-off-by: Baolin Wang Reviewed-by: David Hildenbrand (Arm) --- arch/arm64/include/asm/pgtable.h | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgta= ble.h index aa4b13da6371..ab451d20e4c5 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1812,16 +1812,22 @@ static inline pte_t ptep_get_and_clear(struct mm_st= ruct *mm, return __ptep_get_and_clear(mm, addr, ptep); } =20 +#define test_and_clear_young_ptes test_and_clear_young_ptes +static inline int test_and_clear_young_ptes(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, + unsigned int nr) +{ + if (likely(nr =3D=3D 1 && !pte_cont(__ptep_get(ptep)))) + return __ptep_test_and_clear_young(vma, addr, ptep); + + return contpte_test_and_clear_young_ptes(vma, addr, ptep, nr); +} + #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { - pte_t orig_pte =3D __ptep_get(ptep); - - if (likely(!pte_valid_cont(orig_pte))) - return __ptep_test_and_clear_young(vma, addr, ptep); - - return contpte_test_and_clear_young_ptes(vma, addr, ptep, 1); + return test_and_clear_young_ptes(vma, addr, ptep, 1); } =20 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH --=20 2.47.3
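[Not part of the series; an illustrative aside.] The batching idea in patch 5/6 can be sketched in plain user-space C. This is a toy model, not kernel code: `struct toy_pte`, `toy_pte_batch()`, `toy_test_and_clear_young()`, and `toy_walk()` are hypothetical stand-ins for PTEs, folio_pte_batch(), test_and_clear_young_ptes_notify(), and the PTE-table walk. It shows how the walk advances by whole batches, and why the 'young' counter can be inflated: the entire batch is counted young as soon as any one PTE in it is.

```c
#include <assert.h>
#include <stddef.h>

/* Toy "PTE": which folio the page belongs to, plus a young (accessed) bit. */
struct toy_pte {
	int folio;	/* id of the folio this entry maps */
	int young;	/* hardware-set accessed bit */
};

/* Stand-in for folio_pte_batch(): count consecutive entries of one folio. */
static size_t toy_pte_batch(const struct toy_pte *pte, size_t max_nr)
{
	size_t nr = 1;

	while (nr < max_nr && pte[nr].folio == pte[0].folio)
		nr++;
	return nr;
}

/* Stand-in for the batched test-and-clear: one pass over the batch. */
static int toy_test_and_clear_young(struct toy_pte *pte, size_t nr)
{
	int young = 0;

	for (size_t i = 0; i < nr; i++) {
		young |= pte[i].young;
		pte[i].young = 0;
	}
	return young;
}

/* Walk a table, advancing by whole batches; returns the 'young' count. */
static size_t toy_walk(struct toy_pte *pte, size_t total)
{
	size_t young = 0;

	for (size_t i = 0; i < total; ) {
		size_t nr = toy_pte_batch(pte + i, total - i);

		if (toy_test_and_clear_young(pte + i, nr))
			young += nr;	/* may overcount: not all nr were young */
		i += nr;
	}
	return young;
}
```

For example, four entries of one folio with a single accessed entry yield a young count of 4 — exactly the kind of overcount the commit message's impact analysis argues is acceptable.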
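[Likewise an illustrative aside.] The arm64 dispatch in patch 6/6 — a cheap single-entry fast path when nr == 1 and the entry is not part of a contiguous mapping, otherwise the contpte range helper — can be modelled in user space too. Everything below is hypothetical: `TOY_AF`, `TOY_CONT`, and the `toy_*` helpers merely mirror the roles of the Access Flag, the contiguous bit, __ptep_test_and_clear_young(), and contpte_test_and_clear_young_ptes().

```c
#include <assert.h>
#include <stdint.h>

#define TOY_AF   (1u << 0)	/* accessed ("young") flag */
#define TOY_CONT (1u << 1)	/* entry is part of a contiguous range */

static int calls_single, calls_range;	/* record which path ran */

/* Stand-in for __ptep_test_and_clear_young(): one entry. */
static int toy_single(uint32_t *ptep)
{
	int young = !!(*ptep & TOY_AF);

	calls_single++;
	*ptep &= ~TOY_AF;
	return young;
}

/* Stand-in for contpte_test_and_clear_young_ptes(): whole range. */
static int toy_range(uint32_t *ptep, unsigned int nr)
{
	int young = 0;

	calls_range++;
	for (unsigned int i = 0; i < nr; i++) {
		young |= !!(ptep[i] & TOY_AF);
		ptep[i] &= ~TOY_AF;
	}
	return young;
}

/* Mirror of the patch's dispatch: fast path only for nr == 1, non-cont. */
static int toy_test_and_clear_young_ptes(uint32_t *ptep, unsigned int nr)
{
	if (nr == 1 && !(*ptep & TOY_CONT))
		return toy_single(ptep);

	return toy_range(ptep, nr);
}
```

The call counters just make the dispatch observable: a plain single entry takes the fast path, while a cont-marked entry or nr > 1 falls back to the range helper.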