[PATCH V2] mm/swap: take folio refcount after testing the LRU flag
Posted by yangge1116@126.com 1 year, 5 months ago
From: yangge <yangge1116@126.com>

Whoever passes a folio to __folio_batch_add_and_move() must hold
a reference; otherwise something else is already broken. Because the
folio is referenced, it cannot be freed elsewhere, so we can safely
clear the folio's LRU flag. As discussed with David in [1], we should
take the reference after testing the LRU flag, not before.

Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
Signed-off-by: yangge <yangge1116@126.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/swap.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

V2:
   Add sanity check suggested by David

diff --git a/mm/swap.c b/mm/swap.c
index 67a2467..c048659 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -226,12 +226,11 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 {
 	unsigned long flags;
 
-	folio_get(folio);
-
-	if (on_lru && !folio_test_clear_lru(folio)) {
-		folio_put(folio);
+	if (on_lru && !folio_test_clear_lru(folio))
 		return;
-	}
+
+	VM_WARN_ON_ONCE(!folio_ref_count(folio));
+	folio_get(folio);
 
 	if (disable_irq)
 		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);
-- 
2.7.4
Re: [PATCH V2] mm/swap: take folio refcount after testing the LRU flag
Posted by Yu Zhao 1 year, 5 months ago
On Mon, Aug 12, 2024 at 5:22 AM <yangge1116@126.com> wrote:
[...]
> @@ -226,12 +226,11 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>  {
>         unsigned long flags;
>
> -       folio_get(folio);
> -
> -       if (on_lru && !folio_test_clear_lru(folio)) {
> -               folio_put(folio);
> +       if (on_lru && !folio_test_clear_lru(folio))
>                 return;
> -       }
> +
> +       VM_WARN_ON_ONCE(!folio_ref_count(folio));
> +       folio_get(folio);

No need to check folio_ref_count() here, because folio_get() already
does it with a better check folio_ref_zero_or_close_to_overflow().
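
For reference, folio_get() in include/linux/mm.h is, roughly, the check
Yu is referring to:

	static inline void folio_get(struct folio *folio)
	{
		/* With CONFIG_DEBUG_VM, catch a zero or near-overflow
		 * refcount before incrementing it. */
		VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
		folio_ref_inc(folio);
	}

so the VM_WARN_ON_ONCE(!folio_ref_count(folio)) added in V2 duplicates a
check that the folio_get() right after it already performs.
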
Re: [PATCH V2] mm/swap: take folio refcount after testing the LRU flag
Posted by David Hildenbrand 1 year, 5 months ago
On 12.08.24 21:06, Yu Zhao wrote:
> On Mon, Aug 12, 2024 at 5:22 AM <yangge1116@126.com> wrote:
[...]
>> -       folio_get(folio);
>> -
>> -       if (on_lru && !folio_test_clear_lru(folio)) {
>> -               folio_put(folio);
>> +       if (on_lru && !folio_test_clear_lru(folio))
>>                  return;
>> -       }
>> +
>> +       VM_WARN_ON_ONCE(!folio_ref_count(folio));
>> +       folio_get(folio);
> 
> No need to check folio_ref_count() here, because folio_get() already
> does it with a better check folio_ref_zero_or_close_to_overflow().
> 

Ah, good point, thanks!

-- 
Cheers,

David / dhildenb