[PATCH] mm: add cond_resched() in swapin_walk_pmd_entry()

Kefeng Wang posted 1 patch 2 years, 9 months ago
When handling MADV_WILLNEED in madvise(), a softlockup may occur in
swapin_walk_pmd_entry() when swapping in a large amount of memory from a
slow device. Add a cond_resched() there to avoid the possible softlockup.

Fixes: 1998cc048901 ("mm: make madvise(MADV_WILLNEED) support swap file prefetch")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/madvise.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/madvise.c b/mm/madvise.c
index b913ba6efc10..fea589d8a2fb 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 			put_page(page);
 	}
 	swap_read_unplug(splug);
+	cond_resched();
 
 	return 0;
 }
-- 
2.35.3
Re: [PATCH] mm: add cond_resched() in swapin_walk_pmd_entry()
Posted by Andrew Morton 2 years, 9 months ago
On Mon, 5 Dec 2022 22:03:27 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> When handling MADV_WILLNEED in madvise(), a softlockup may occur in
> swapin_walk_pmd_entry() when swapping in a large amount of memory from a
> slow device. Add a cond_resched() there to avoid the possible softlockup.
> 
> ...
>
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>  			put_page(page);
>  	}
>  	swap_read_unplug(splug);
> +	cond_resched();
>  
>  	return 0;
>  }

I wonder if this would be better in walk_pmd_range(), to address other
very large walk attempts.
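Andrew's suggestion would move the yield point into the generic page
walker so that every large walk benefits, not only MADV_WILLNEED. A rough
sketch of the idea (paraphrased from memory with elisions, not an actual
hunk against any tree):

```c
/* Sketch only: walk_pmd_range() in mm/pagewalk.c handles one PMD entry
 * per loop iteration, so a cond_resched() here would cover every caller
 * of walk_page_range(), not just the swapin walk. */
static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	...
	do {
		next = pmd_addr_end(addr, end);
		/* existing pmd_entry / pte_entry handling elided */
		cond_resched();
	} while (pmd++, addr = next, addr != end);
	...
}
```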
Re: [PATCH] mm: add cond_resched() in swapin_walk_pmd_entry()
Posted by Kefeng Wang 2 years, 9 months ago
On 2022/12/6 5:03, Andrew Morton wrote:
> On Mon, 5 Dec 2022 22:03:27 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>> When handling MADV_WILLNEED in madvise(), a softlockup may occur in
>> swapin_walk_pmd_entry() when swapping in a large amount of memory from a
>> slow device. Add a cond_resched() there to avoid the possible softlockup.
>>
>> ...
>>
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>>   			put_page(page);
>>   	}
>>   	swap_read_unplug(splug);
>> +	cond_resched();
>>   
>>   	return 0;
>>   }
> I wonder if this would be better in walk_pmd_range(), to address other
> very large walk attempts.
mm/madvise.c:287:       walk_page_range(vma->vm_mm, start, end, &swapin_walk_ops, vma);
mm/madvise.c:514:       walk_page_range(vma->vm_mm, addr, end, &cold_walk_ops, &walk_private);

mm/madvise.c:762:       walk_page_range(vma->vm_mm, range.start, range.end,
mm/madvise.c-763-                       &madvise_free_walk_ops, &tlb);

cold_walk_ops and madvise_free_walk_ops already call cond_resched() in
their pmd_entry walkers, so there may be no need to add a precautionary
cond_resched() to the generic walker for now.

