[PATCH V2 1/2] mm/khugepaged: do synchronous writeback for MADV_COLLAPSE

Shivank Garg posted 2 patches 1 week, 4 days ago
There is a newer version of this series
Posted by Shivank Garg 1 week, 4 days ago
When MADV_COLLAPSE is called on file-backed mappings (e.g., executable
text sections), the pages may still be dirty from recent writes and
cause collapse to fail with -EINVAL. This is particularly problematic
for freshly copied executables on filesystems, where page cache folios
remain dirty until background writeback completes.

The current code in collapse_file() triggers async writeback via
filemap_flush() and expects khugepaged to revisit the page later.
However, MADV_COLLAPSE is a synchronous operation where userspace
expects immediate results.

Perform synchronous writeback in madvise_collapse() before attempting
collapse, to avoid failing on the first attempt.

Reported-by: Branden Moore <Branden.Moore@amd.com>
Closes: https://lore.kernel.org/all/4e26fe5e-7374-467c-a333-9dd48f85d7cc@amd.com
Fixes: 34488399fa08 ("mm/madvise: add file and shmem support to MADV_COLLAPSE")
Suggested-by: David Hildenbrand <david@kernel.org>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 mm/khugepaged.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 97d1b2824386..066a332c76ad 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -22,6 +22,7 @@
 #include <linux/dax.h>
 #include <linux/ksm.h>
 #include <linux/pgalloc.h>
+#include <linux/backing-dev.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -2784,6 +2785,31 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 	hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
 	hend = end & HPAGE_PMD_MASK;
 
+	/*
+	 * For file-backed VMAs, perform synchronous writeback to ensure
+	 * dirty folios are flushed before attempting collapse. This avoids
+	 * failing on the first attempt when freshly-written executable text
+	 * is still dirty in the page cache.
+	 */
+	if (!vma_is_anonymous(vma) && vma->vm_file) {
+		struct address_space *mapping = vma->vm_file->f_mapping;
+
+		if (mapping_can_writeback(mapping)) {
+			pgoff_t pgoff_start = linear_page_index(vma, hstart);
+			pgoff_t pgoff_end = linear_page_index(vma, hend);
+			loff_t lstart = (loff_t)pgoff_start << PAGE_SHIFT;
+			loff_t lend = ((loff_t)pgoff_end << PAGE_SHIFT) - 1;
+
+			mmap_read_unlock(mm);
+			mmap_locked = false;
+
+			if (filemap_write_and_wait_range(mapping, lstart, lend)) {
+				last_fail = SCAN_FAIL;
+				goto out_maybelock;
+			}
+		}
+	}
+
 	for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
 		int result = SCAN_FAIL;
 
-- 
2.43.0
Re: [PATCH V2 1/2] mm/khugepaged: do synchronous writeback for MADV_COLLAPSE
Posted by David Hildenbrand (Red Hat) 1 week, 4 days ago
On 11/20/25 07:50, Shivank Garg wrote:
> [...]
> +	/*
> +	 * For file-backed VMAs, perform synchronous writeback to ensure
> +	 * dirty folios are flushed before attempting collapse. This avoids
> +	 * failing on the first attempt when freshly-written executable text
> +	 * is still dirty in the page cache.
> +	 */
> +	if (!vma_is_anonymous(vma) && vma->vm_file) {
> +		struct address_space *mapping = vma->vm_file->f_mapping;
> +
> +		if (mapping_can_writeback(mapping)) {
> +			pgoff_t pgoff_start = linear_page_index(vma, hstart);
> +			pgoff_t pgoff_end = linear_page_index(vma, hend);
> +			loff_t lstart = (loff_t)pgoff_start << PAGE_SHIFT;
> +			loff_t lend = ((loff_t)pgoff_end << PAGE_SHIFT) - 1;
> +

Hm, so we always do that, without any indication that there actually is 
something dirty there.

Internally filemap_write_and_wait_range() uses something called
mapping_needs_writeback(), but that check applies to the complete file,
not to the requested range.

Wouldn't it be better to do that only if we detect that there is
actually a dirty folio in the range?

That is, if we find any dirty folio in hpage_collapse_scan_file() and we 
are in madvise, do that dance here and retry?

-- 
Cheers

David
Re: [PATCH V2 1/2] mm/khugepaged: do synchronous writeback for MADV_COLLAPSE
Posted by Garg, Shivank 1 week, 3 days ago

On 11/20/2025 7:05 PM, David Hildenbrand (Red Hat) wrote:
> On 11/20/25 07:50, Shivank Garg wrote:
>> [...]
> 
> Hm, so we always do that, without any indication that there actually is something dirty there.
> 
> Internally filemap_write_and_wait_range() uses something called mapping_needs_writeback(), but it also applies to the complete file, not a range.
> 
> Wouldn't it be better to do that only if we detect that there is actually a dirty folio in the range?
> 
> That is, if we find any dirty folio in hpage_collapse_scan_file() and we are in madvise, do that dance here and retry?
> 

Good point! This makes sense to me.
I'll send V3 with this approach.

Thanks,
Shivank
Re: [PATCH V2 1/2] mm/khugepaged: do synchronous writeback for MADV_COLLAPSE
Posted by Lance Yang 1 week, 4 days ago

On 2025/11/20 14:50, Shivank Garg wrote:
> When MADV_COLLAPSE is called on file-backed mappings (e.g., executable
> text sections), the pages may still be dirty from recent writes and
> cause collapse to fail with -EINVAL. This is particularly problematic
> for freshly copied executables on filesystems, where page cache folios
> remain dirty until background writeback completes.
> 
> The current code in collapse_file() triggers async writeback via
> filemap_flush() and expects khugepaged to revisit the page later.
> However, MADV_COLLAPSE is a synchronous operation where userspace
> expects immediate results.
> 
> Perform synchronous writeback in madvise_collapse() before attempting
> collapse to avoid failing on first attempt.

Thanks!

> [...]
> @@ -2784,6 +2785,31 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>   	hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
>   	hend = end & HPAGE_PMD_MASK;
>   
> +	/*
> +	 * For file-backed VMAs, perform synchronous writeback to ensure
> +	 * dirty folios are flushed before attempting collapse. This avoids
> +	 * failing on the first attempt when freshly-written executable text
> +	 * is still dirty in the page cache.
> +	 */
> +	if (!vma_is_anonymous(vma) && vma->vm_file) {
> +		struct address_space *mapping = vma->vm_file->f_mapping;
> +
> +		if (mapping_can_writeback(mapping)) {
> +			pgoff_t pgoff_start = linear_page_index(vma, hstart);
> +			pgoff_t pgoff_end = linear_page_index(vma, hend);
> +			loff_t lstart = (loff_t)pgoff_start << PAGE_SHIFT;
> +			loff_t lend = ((loff_t)pgoff_end << PAGE_SHIFT) - 1;

It looks like we need to hold a reference to the file here before
dropping the mmap lock :)

			file = get_file(vma->vm_file);

Without it, the vma could be destroyed by a concurrent munmap() while
we are waiting in filemap_write_and_wait_range(), leading to a UAF
on mapping, IIUC ...

> +
> +			mmap_read_unlock(mm);
> +			mmap_locked = false;
> +
> +			if (filemap_write_and_wait_range(mapping, lstart, lend)) {

And drop the reference :)

				fput(file);


> +				last_fail = SCAN_FAIL;
> +				goto out_maybelock;
> +			}

Same here :)

			fput(file);


> +		}
> +	}
> +
>   	for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
>   		int result = SCAN_FAIL;
>   

Cheers,
Lance
Re: [PATCH V2 1/2] mm/khugepaged: do synchronous writeback for MADV_COLLAPSE
Posted by Garg, Shivank 1 week, 3 days ago

On 11/20/2025 6:31 PM, Lance Yang wrote:
> 
> 
> On 2025/11/20 14:50, Shivank Garg wrote:
>> [...]
> It looks like we need to hold a reference to the file here before
> dropping the mmap lock :)
> 
>             file = get_file(vma->vm_file);
> 
> Without it, the vma could be destroyed by a concurrent munmap() while
> we are waiting in filemap_write_and_wait_range(), leading to a UAF
> on mapping, IIUC ...

Excellent catch!
Thanks for saving me from this nasty bug. I'll be more careful about file
reference handling in the next version.

Best Regards,
Shivank