[PATCH v2 3/9] mm/swap: avoid doing extra unlock error checks for direct swapin

Posted by Kairui Song 1 year, 11 months ago
From: Kairui Song <kasong@tencent.com>

When swapping in a page, mem_cgroup_swapin_charge_folio() is called
for the newly allocated folio. Nothing else references the folio at
that point, so there is no need to set the lock bit that early. This
avoids the extra folio_unlock() call on the error path.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swap_state.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 24cb93ed5081..6130de8d5226 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -881,16 +881,15 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
 				vma, vmf->address, false);
 	if (folio) {
-		__folio_set_locked(folio);
-		__folio_set_swapbacked(folio);
-
-		if (mem_cgroup_swapin_charge_folio(folio,
-					vma->vm_mm, GFP_KERNEL,
-					entry)) {
-			folio_unlock(folio);
+		if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
+						   GFP_KERNEL, entry)) {
 			folio_put(folio);
 			return NULL;
 		}
+
+		__folio_set_locked(folio);
+		__folio_set_swapbacked(folio);
+
 		mem_cgroup_swapin_uncharge_swap(entry);
 
 		shadow = get_shadow_from_swap_cache(entry);
-- 
2.43.0
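To make the effect of the reordering concrete, here is a minimal sketch of
the resulting allocation path, built only from the calls visible in the diff
above; the helper name swapin_direct_alloc() is hypothetical, and the swap
cache handling that follows in the real function is omitted:

/*
 * Hypothetical helper sketching swapin_direct()'s allocation path
 * after this patch; not the actual kernel function.
 */
static struct folio *swapin_direct_alloc(swp_entry_t entry,
					 struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct folio *folio;

	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
				vma, vmf->address, false);
	if (!folio)
		return NULL;

	/*
	 * Charge before setting any folio flags.  The folio is still
	 * private to this path, so the only cleanup needed on failure
	 * is folio_put() -- no folio_unlock().
	 */
	if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
					   GFP_KERNEL, entry)) {
		folio_put(folio);
		return NULL;
	}

	/* No one else holds a reference, so the non-atomic setters are safe. */
	__folio_set_locked(folio);
	__folio_set_swapbacked(folio);

	mem_cgroup_swapin_uncharge_swap(entry);
	return folio;
}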
Re: [PATCH v2 3/9] mm/swap: avoid doing extra unlock error checks for direct swapin
Posted by Huang, Ying 1 year, 11 months ago
Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>
> When swapping in a page, mem_cgroup_swapin_charge_folio() is called
> for the newly allocated folio. Nothing else references the folio at
> that point, so there is no need to set the lock bit that early. This
> avoids the extra folio_unlock() call on the error path.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap_state.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 24cb93ed5081..6130de8d5226 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -881,16 +881,15 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
>  	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
>  				vma, vmf->address, false);
>  	if (folio) {
> -		__folio_set_locked(folio);
> -		__folio_set_swapbacked(folio);
> -
> -		if (mem_cgroup_swapin_charge_folio(folio,
> -					vma->vm_mm, GFP_KERNEL,
> -					entry)) {
> -			folio_unlock(folio);
> +		if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> +						   GFP_KERNEL, entry)) {
>  			folio_put(folio);
>  			return NULL;
>  		}
> +
> +		__folio_set_locked(folio);
> +		__folio_set_swapbacked(folio);
> +
>  		mem_cgroup_swapin_uncharge_swap(entry);
>  
>  		shadow = get_shadow_from_swap_cache(entry);

I don't see any issue with the patch.  But the other caller of
mem_cgroup_swapin_charge_folio(), in __read_swap_cache_async(), sets up
the newly allocated folio in the same way this code did before the
change.  Would it be better to keep the two callers consistent, given
that the benefit of the change is small?

--
Best Regards,
Huang, Ying
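For comparison, the ordering Ying refers to in __read_swap_cache_async()
looks roughly like the sketch below (paraphrased, not an exact quote of
mm/swap_state.c, and with the rest of the real error-path cleanup omitted):
the folio is locked and marked swapbacked before the memcg charge, so the
error path must unlock as well as drop the reference.

	/* Sketch of the pre-existing pattern in __read_swap_cache_async(). */
	__folio_set_locked(folio);
	__folio_set_swapbacked(folio);

	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
		goto fail_unlock;

	/* ... rest of the swap cache setup ... */

fail_unlock:
	/* The early lock forces an extra folio_unlock() here. */
	folio_unlock(folio);
	folio_put(folio);
	return NULL;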
Re: [PATCH v2 3/9] mm/swap: avoid doing extra unlock error checks for direct swapin
Posted by Kairui Song 1 year, 11 months ago
On Thu, Jan 4, 2024 at 4:12 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Kairui Song <ryncsn@gmail.com> writes:
>
> > From: Kairui Song <kasong@tencent.com>
> >
> > When swapping in a page, mem_cgroup_swapin_charge_folio() is called
> > for the newly allocated folio. Nothing else references the folio at
> > that point, so there is no need to set the lock bit that early. This
> > avoids the extra folio_unlock() call on the error path.
> >
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > ---
> >  mm/swap_state.c | 13 ++++++-------
> >  1 file changed, 6 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 24cb93ed5081..6130de8d5226 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -881,16 +881,15 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
> >       folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> >                               vma, vmf->address, false);
> >       if (folio) {
> > -             __folio_set_locked(folio);
> > -             __folio_set_swapbacked(folio);
> > -
> > -             if (mem_cgroup_swapin_charge_folio(folio,
> > -                                     vma->vm_mm, GFP_KERNEL,
> > -                                     entry)) {
> > -                     folio_unlock(folio);
> > +             if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> > +                                                GFP_KERNEL, entry)) {
> >                       folio_put(folio);
> >                       return NULL;
> >               }
> > +
> > +             __folio_set_locked(folio);
> > +             __folio_set_swapbacked(folio);
> > +
> >               mem_cgroup_swapin_uncharge_swap(entry);
> >
> >               shadow = get_shadow_from_swap_cache(entry);
>
> I don't see any issue with the patch.  But the other caller of
> mem_cgroup_swapin_charge_folio(), in __read_swap_cache_async(), sets up
> the newly allocated folio in the same way this code did before the
> change.  Would it be better to keep the two callers consistent, given
> that the benefit of the change is small?

OK, this is just a trivial optimization, so I can drop it.