CVE-2024-57884 patch review feedback (https://lore.kernel.org/linux-cve-announce/2025011510-CVE-2024-57884-4cf8@gregkh/#R)

Posted by liuqiqi@kylinos.cn 4 months, 2 weeks ago
The fix for CVE-2024-57884, "mm: vmscan: account for free pages to prevent infinite Loop in throttle_direct_reclaim()", stops throttle_direct_reclaim() from looping forever when zones still have free pages but no reclaimable ones. It modifies zone_reclaimable_pages() as follows:
@@ -342,7 +342,14 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
 	if (get_nr_swap_pages() > 0)
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
-
+	/*
+	 * If there are no reclaimable file-backed or anonymous pages,
+	 * ensure zones with sufficient free pages are not skipped.
+	 * This prevents zones like DMA32 from being ignored in reclaim
+	 * scenarios where they can still help alleviate memory pressure.
+	 */
+	if (nr == 0)
+		nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	return nr;
 }
However, should_reclaim_retry() calls zone_reclaimable_pages() to count reclaimable pages and then adds NR_FREE_PAGES separately. When nr is 0, NR_FREE_PAGES is therefore counted twice. Doesn't this make the "available" estimate inaccurate?
static inline bool
should_reclaim_retry(gfp_t gfp_mask, unsigned order,
		     struct alloc_context *ac, int alloc_flags,
		     bool did_some_progress, int *no_progress_loops)
{
......

		available = reclaimable = zone_reclaimable_pages(zone);
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);

		/*
		 * Would the allocation succeed if we reclaimed all
		 * reclaimable pages?
		 */
		wmark = __zone_watermark_ok(zone, order, min_wmark,
				ac->highest_zoneidx, alloc_flags, available);

The compaction_zonelist_suitable() function has the same problem:
bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
		int alloc_flags)
{
......
		available = zone_reclaimable_pages(zone) / order;
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
		if (__compaction_suitable(zone, order, min_wmark_pages(zone),
					  ac->highest_zoneidx, available))
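
With made-up numbers: if a zone has no reclaimable file or anon pages and NR_FREE_PAGES is 1024, the patched zone_reclaimable_pages() returns 1024, so for order = 3 this computes available = 1024/3 + 1024 = 1365 pages (integer division), even though only 1024 pages are actually free.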

If this is indeed a problem, could it be fixed as follows instead:
diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6417,7 +6417,7 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
                return true;
 
        for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
-               if (!zone_reclaimable_pages(zone))
+               if (!zone_reclaimable_pages(zone) || !zone_page_state_snapshot(zone, NR_FREE_PAGES))
                        continue;
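
For context, this skip decides which zones feed the throttling check; abridged from allow_direct_reclaim() in mm/vmscan.c (paraphrased, exact code varies by kernel version):

	pfmemalloc_reserve += min_wmark_pages(zone);
	free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
	...
	wmark_ok = free_pages > pfmemalloc_reserve / 2;

A skipped zone contributes neither its free pages nor its min watermark, which is how a zone like DMA32 with plenty of free but no reclaimable pages could never make wmark_ok true before the fix.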

Signed-off-by: liuqiqi <liuqiqi@kylinos.cn>
Re: CVE-2024-57884 patch review feedback (https://lore.kernel.org/linux-cve-announce/2025011510-CVE-2024-57884-4cf8@gregkh/#R)
Posted by Greg KH 4 months, 2 weeks ago
On Thu, Aug 07, 2025 at 09:05:15PM +0800, liuqiqi@kylinos.cn wrote:
> The fix for CVE-2024-57884, "mm: vmscan: account for free pages to prevent infinite Loop in throttle_direct_reclaim()", stops throttle_direct_reclaim() from looping forever when zones still have free pages but no reclaimable ones. It modifies zone_reclaimable_pages() as follows:
> @@ -342,7 +342,14 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>  	if (get_nr_swap_pages() > 0)
>  		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
>  			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
> -
> +	/*
> +	 * If there are no reclaimable file-backed or anonymous pages,
> +	 * ensure zones with sufficient free pages are not skipped.
> +	 * This prevents zones like DMA32 from being ignored in reclaim
> +	 * scenarios where they can still help alleviate memory pressure.
> +	 */
> +	if (nr == 0)
> +		nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
>  	return nr;
>  }
> However, should_reclaim_retry() calls zone_reclaimable_pages() to count reclaimable pages and then adds NR_FREE_PAGES separately. When nr is 0, NR_FREE_PAGES is therefore counted twice. Doesn't this make the "available" estimate inaccurate?
> static inline bool
> should_reclaim_retry(gfp_t gfp_mask, unsigned order,
> 		     struct alloc_context *ac, int alloc_flags,
> 		     bool did_some_progress, int *no_progress_loops)
> {
> ......
> 
> 		available = reclaimable = zone_reclaimable_pages(zone);
> 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> 
> 		/*
> 		 * Would the allocation succeed if we reclaimed all
> 		 * reclaimable pages?
> 		 */
> 		wmark = __zone_watermark_ok(zone, order, min_wmark,
> 				ac->highest_zoneidx, alloc_flags, available);
> 
> The compaction_zonelist_suitable() function has the same problem:
> bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
> 		int alloc_flags)
> {
> ......
> 		available = zone_reclaimable_pages(zone) / order;
> 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> 		if (__compaction_suitable(zone, order, min_wmark_pages(zone),
> 					  ac->highest_zoneidx, available))
> 
> If this is indeed a problem, could it be fixed as follows instead:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6417,7 +6417,7 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
>                 return true;
>  
>         for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
> -               if (!zone_reclaimable_pages(zone))
> +               if (!zone_reclaimable_pages(zone) || !zone_page_state_snapshot(zone, NR_FREE_PAGES))
>                         continue;
> 
> Signed-off-by: liuqiqi <liuqiqi@kylinos.cn>

I have no idea what you are asking about or wishing to see change.
Please read the kernel documentation for how to send a proper patch.

thanks,

greg k-h