[PATCH 1/4] mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap() to swap_range_alloc()

Kemeng Shi posted 4 patches 6 months, 3 weeks ago
Posted by Kemeng Shi 6 months, 3 weeks ago
When folio_alloc_swap() encounters a failure in either
mem_cgroup_try_charge_swap() or add_to_swap_cache(), the nr_swap_pages
counter has not yet been decremented for the allocated entry. However,
put_swap_folio() on the error path still increments nr_swap_pages via
swap_range_free(), so the counter ends up unbalanced.

Move the nr_swap_pages decrement from folio_alloc_swap() into
swap_range_alloc(), so that every decrement is paired with the
increment in swap_range_free().
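
For illustration only (not part of the patch): the miscount can be
modeled in user space roughly as below. The names mirror mm/swapfile.c,
but the bodies are simplified stand-ins.

/* Minimal model of the pre-patch error path. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_long nr_swap_pages;

static void swap_range_free_model(long nr)
{
	/* The real swap_range_free() adds the freed entries back. */
	atomic_fetch_add(&nr_swap_pages, nr);
}

/* Pre-patch behavior: the decrement sits after the failure points. */
static bool folio_alloc_swap_model(long nr, bool cache_add_ok)
{
	/* swap_range_alloc() reserved the range, counter untouched. */
	if (!cache_add_ok) {
		swap_range_free_model(nr);	/* put_swap_folio() path */
		return false;			/* decrement below skipped */
	}
	atomic_fetch_sub(&nr_swap_pages, nr);	/* success only */
	return true;
}

int main(void)
{
	atomic_store(&nr_swap_pages, 8);
	folio_alloc_swap_model(1, false);	/* add_to_swap_cache() fails */
	printf("nr_swap_pages = %ld (expected 8)\n",
	       atomic_load(&nr_swap_pages));	/* prints 9: unpaired add */
	return 0;
}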

Fixes: 0ff67f990bd45 ("mm, swap: remove swap slot cache")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/swapfile.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 026090bf3efe..75b69213c2e7 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1115,6 +1115,7 @@ static void swap_range_alloc(struct swap_info_struct *si,
 		if (vm_swap_full())
 			schedule_work(&si->reclaim_work);
 	}
+	atomic_long_sub(nr_entries, &nr_swap_pages);
 }
 
 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
@@ -1313,7 +1314,6 @@ int folio_alloc_swap(struct folio *folio, gfp_t gfp)
 	if (add_to_swap_cache(folio, entry, gfp | __GFP_NOMEMALLOC, NULL))
 		goto out_free;
 
-	atomic_long_sub(size, &nr_swap_pages);
 	return 0;
 
 out_free:
-- 
2.30.0
Re: [PATCH 1/4] mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap() to swap_range_alloc()
Posted by Baoquan He 6 months, 2 weeks ago
On 05/22/25 at 08:25pm, Kemeng Shi wrote:
> When folio_alloc_swap() encounters a failure in either
> mem_cgroup_try_charge_swap() or add_to_swap_cache(), the nr_swap_pages
> counter has not yet been decremented for the allocated entry. However,
> put_swap_folio() on the error path still increments nr_swap_pages via
> swap_range_free(), so the counter ends up unbalanced.
> 
> Move the nr_swap_pages decrement from folio_alloc_swap() into
> swap_range_alloc(), so that every decrement is paired with the
> increment in swap_range_free().
> 
> Fixes: 0ff67f990bd45 ("mm, swap: remove swap slot cache")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>  mm/swapfile.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

LGTM,

Reviewed-by: Baoquan He <bhe@redhat.com>

Re: [PATCH 1/4] mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap() to swap_range_alloc()
Posted by Kairui Song 6 months, 3 weeks ago
On Thu, May 22, 2025 at 11:32 AM Kemeng Shi <shikemeng@huaweicloud.com> wrote:
>
> When folio_alloc_swap() encounters a failure in either
> mem_cgroup_try_charge_swap() or add_to_swap_cache(), the nr_swap_pages
> counter has not yet been decremented for the allocated entry. However,
> put_swap_folio() on the error path still increments nr_swap_pages via
> swap_range_free(), so the counter ends up unbalanced.
>
> Move the nr_swap_pages decrement from folio_alloc_swap() into
> swap_range_alloc(), so that every decrement is paired with the
> increment in swap_range_free().
>
> Fixes: 0ff67f990bd45 ("mm, swap: remove swap slot cache")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>  mm/swapfile.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Good catch! Moving the counter update into swap_range_alloc() makes the
logic much easier to follow.
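
With the change, both counter updates sit next to the range bookkeeping
itself, roughly like this (arguments and bodies elided, only the
counter lines shown):

static void swap_range_alloc(...)
{
	...
	atomic_long_sub(nr_entries, &nr_swap_pages);
}

static void swap_range_free(...)
{
	...
	atomic_long_add(nr_entries, &nr_swap_pages);
	...
}

Every allocation decrement now has a matching free-side increment,
including on the folio_alloc_swap() error paths via put_swap_folio().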

Reviewed-by: Kairui Song <kasong@tencent.com>