[PATCH V3 2/2] mm/khugepaged: retry with sync writeback for MADV_COLLAPSE

Posted by Shivank Garg 2 months, 1 week ago
When MADV_COLLAPSE is called on file-backed mappings (e.g., executable
text sections), the pages may still be dirty from recent writes. In that
case collapse_file() only triggers async writeback and fails with
SCAN_PAGE_DIRTY_OR_WRITEBACK (-EAGAIN).

MADV_COLLAPSE is a synchronous operation where userspace expects
immediate results. If the collapse fails due to dirty pages, perform
synchronous writeback on the specific range and retry once.

This avoids spurious failures for freshly written executables without
adding synchronous I/O for mappings that are already clean.
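
For illustration, a caller that can hit this path looks roughly like the
sketch below (userspace-only, with a hypothetical path and a single
PMD-sized region; not part of this patch):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE	25
	#endif

	int main(void)
	{
		const size_t len = 2UL << 20;	/* one PMD-sized (2M) range */
		/* hypothetical, freshly written executable */
		int fd = open("/tmp/freshly-written-bin", O_RDONLY);
		void *text;

		if (fd < 0)
			return 1;

		text = mmap(NULL, len, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
		if (text == MAP_FAILED)
			return 1;

		/*
		 * If the file was written moments ago its pagecache may still
		 * be dirty; with this patch the kernel writes the range back
		 * synchronously and retries the collapse once instead of
		 * failing outright.
		 */
		if (madvise(text, len, MADV_COLLAPSE))
			perror("MADV_COLLAPSE");

		munmap(text, len);
		close(fd);
		return 0;
	}

With this change the one-shot synchronous writeback and retry happen in
the kernel, so such callers no longer need to work around the failure by
explicitly syncing the file themselves before retrying.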

Reported-by: Branden Moore <Branden.Moore@amd.com>
Closes: https://lore.kernel.org/all/4e26fe5e-7374-467c-a333-9dd48f85d7cc@amd.com
Fixes: 34488399fa08 ("mm/madvise: add file and shmem support to MADV_COLLAPSE")
Suggested-by: David Hildenbrand <david@kernel.org>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 mm/khugepaged.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 219dfa2e523c..7a12e9ef30b4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -22,6 +22,7 @@
 #include <linux/dax.h>
 #include <linux/ksm.h>
 #include <linux/pgalloc.h>
+#include <linux/backing-dev.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -2787,9 +2788,11 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 	hend = end & HPAGE_PMD_MASK;
 
 	for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
+		bool retried = false;
 		int result = SCAN_FAIL;
 
 		if (!mmap_locked) {
+retry:
 			cond_resched();
 			mmap_read_lock(mm);
 			mmap_locked = true;
@@ -2819,6 +2822,44 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 		if (!mmap_locked)
 			*lock_dropped = true;
 
+		/*
+		 * If the file-backed VMA has dirty pages, the scan triggers
+		 * async writeback and returns SCAN_PAGE_DIRTY_OR_WRITEBACK.
+		 * Since MADV_COLLAPSE is sync, we force sync writeback and
+		 * retry once.
+		 */
+		if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !retried) {
+			/*
+			 * File scan drops the lock. We must re-acquire it to
+			 * safely inspect the VMA and hold the file reference.
+			 */
+			if (!mmap_locked) {
+				cond_resched();
+				mmap_read_lock(mm);
+				mmap_locked = true;
+				result = hugepage_vma_revalidate(mm, addr, false, &vma, cc);
+				if (result != SCAN_SUCCEED)
+					goto handle_result;
+			}
+
+			if (!vma_is_anonymous(vma) && vma->vm_file &&
+			    mapping_can_writeback(vma->vm_file->f_mapping)) {
+				struct file *file = get_file(vma->vm_file);
+				pgoff_t pgoff = linear_page_index(vma, addr);
+				loff_t lstart = (loff_t)pgoff << PAGE_SHIFT;
+				loff_t lend = lstart + HPAGE_PMD_SIZE - 1;
+
+				mmap_read_unlock(mm);
+				mmap_locked = false;
+				*lock_dropped = true;
+				filemap_write_and_wait_range(file->f_mapping, lstart, lend);
+				fput(file);
+				retried = true;
+				goto retry;
+			}
+		}
+
+
 handle_result:
 		switch (result) {
 		case SCAN_SUCCEED:
-- 
2.43.0
Re: [PATCH V3 2/2] mm/khugepaged: retry with sync writeback for MADV_COLLAPSE
Posted by Lance Yang 2 months, 1 week ago

On 2025/12/2 02:56, Shivank Garg wrote:
[...]
> +				retried = true;
> +				goto retry;
> +			}
> +		}
> +
> +

Nit: spurious blank line.

>   handle_result:
>   		switch (result) {
>   		case SCAN_SUCCEED:
Re: [PATCH V3 2/2] mm/khugepaged: retry with sync writeback for MADV_COLLAPSE
Posted by Garg, Shivank 2 months ago

On 12/2/2025 10:20 AM, Lance Yang wrote:
> 
> 
> On 2025/12/2 02:56, Shivank Garg wrote:
[...]
>> +                retried = true;
>> +                goto retry;
>> +            }
>> +        }
>> +
>> +
> 
> Nit: spurious blank line.

Ah, I completely missed this. I’ll fix it in the next version.
Hope the rest of the patch looks reasonable. Thanks for the review.

Thanks,
Shivank

Re: [PATCH V3 2/2] mm/khugepaged: retry with sync writeback for MADV_COLLAPSE
Posted by Lance Yang 2 months ago

On 2025/12/4 02:25, Garg, Shivank wrote:
> 
> 
> On 12/2/2025 10:20 AM, Lance Yang wrote:
>>
>>
>> On 2025/12/2 02:56, Shivank Garg wrote:
[...]
>>
>> Nit: spurious blank line.
> 
> Ah, I completely missed this. I’ll fix it in the next version.
> Hope the rest of the patch looks reasonable. Thanks for the review.

Apart from that nit, nothing else jumped out at me :)

Confirmed that the spurious EINVAL is gone, and it works as expected ;p

Tested-by: Lance Yang <lance.yang@linux.dev>

[...]

Cheers,
Lance