[PATCH] btrfs: guard against missing private state in lock_delalloc_folios()

JP Kobryn posted 1 patch 1 week ago
Users of filemap_lock_folio() need to guard against the situation where
release_folio() has been invoked during reclaim but the folio was
ultimately not removed from the page cache. This patch covers one location
which may have been overlooked.

After acquiring the folio, use set_folio_extent_mapped() to ensure the
folio private state is valid. This is especially important in the subpage
case, where the private field is an allocated struct containing bitmap and
lock data.

Failing calls (with -ENOMEM) are treated as transient errors and execution
will follow the existing "try again" path.

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
---
 fs/btrfs/extent_io.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3df399dc8856..573b29f62bc1 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -332,6 +332,18 @@ static noinline int lock_delalloc_folios(struct inode *inode,
 				folio_unlock(folio);
 				goto out;
 			}
+
+			/*
+			 * release_folio() could have cleared the folio private data
+			 * while we were not holding the lock.
+			 * Reset the mapping if needed so subpage operations can access
+			 * a valid private folio state.
+			 */
+			if (set_folio_extent_mapped(folio)) {
+				folio_unlock(folio);
+				goto out;
+			}
+
 			range_start = max_t(u64, folio_pos(folio), start);
 			range_len = min_t(u64, folio_next_pos(folio), end + 1) - range_start;
 			btrfs_folio_set_lock(fs_info, folio, range_start, range_len);
-- 
2.52.0
Re: [PATCH] btrfs: guard against missing private state in lock_delalloc_folios()
Posted by Qu Wenruo 1 week ago

On 2026/1/31 12:04, JP Kobryn wrote:
> Users of filemap_lock_folio() need to guard against the situation where
> release_folio() has been invoked during reclaim but the folio was
> ultimately not removed from the page cache. This patch covers one location
> which may have been overlooked.
> 
> After acquiring the folio, use set_folio_extent_mapped() to ensure the
> folio private state is valid. This is especially important in the subpage
> case, where the private field is an allocated struct containing bitmap and
> lock data.
> 
> Failing calls (with -ENOMEM) are treated as transient errors and execution
> will follow the existing "try again" path.
> 
> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> ---
>   fs/btrfs/extent_io.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 3df399dc8856..573b29f62bc1 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -332,6 +332,18 @@ static noinline int lock_delalloc_folios(struct inode *inode,
>   				folio_unlock(folio);
>   				goto out;
>   			}
> +
> +			/*
> +			 * release_folio() could have cleared the folio private data
> +			 * while we were not holding the lock.
> +			 * Reset the mapping if needed so subpage operations can access
> +			 * a valid private folio state.
> +			 */
> +			if (set_folio_extent_mapped(folio)) {
> +				folio_unlock(folio);
> +				goto out;
> +			}

If the folio has been released, it will not have the dirty flag set.
In that case the folio_test_dirty() check above should trigger and we
exit with -EAGAIN. We will re-search the extent io tree to re-grab a
proper delalloc range.

And if the folio is still dirty, it must still have private set.

Thus I'm afraid this check is a little overkill.

Thanks,
Qu

> +
>   			range_start = max_t(u64, folio_pos(folio), start);
>   			range_len = min_t(u64, folio_next_pos(folio), end + 1) - range_start;
>   			btrfs_folio_set_lock(fs_info, folio, range_start, range_len);

Re: [PATCH] btrfs: guard against missing private state in lock_delalloc_folios()
Posted by JP Kobryn 6 days, 19 hours ago
On 1/30/26 6:15 PM, Qu Wenruo wrote:
> 
> 
> On 2026/1/31 12:04, JP Kobryn wrote:
>> Users of filemap_lock_folio() need to guard against the situation where
>> release_folio() has been invoked during reclaim but the folio was
>> ultimately not removed from the page cache. This patch covers one
>> location which may have been overlooked.
>>
>> After acquiring the folio, use set_folio_extent_mapped() to ensure the
>> folio private state is valid. This is especially important in the subpage
>> case, where the private field is an allocated struct containing bitmap
>> and lock data.
>>
>> Failing calls (with -ENOMEM) are treated as transient errors and
>> execution will follow the existing "try again" path.
>>
>> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
>> ---
>>   fs/btrfs/extent_io.c | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
>> index 3df399dc8856..573b29f62bc1 100644
>> --- a/fs/btrfs/extent_io.c
>> +++ b/fs/btrfs/extent_io.c
>> @@ -332,6 +332,18 @@ static noinline int lock_delalloc_folios(struct inode *inode,
>>                   folio_unlock(folio);
>>                   goto out;
>>               }
>> +
>> +            /*
>> +             * release_folio() could have cleared the folio private data
>> +             * while we were not holding the lock.
>> +             * Reset the mapping if needed so subpage operations can
>> +             * access a valid private folio state.
>> +             */
>> +            if (set_folio_extent_mapped(folio)) {
>> +                folio_unlock(folio);
>> +                goto out;
>> +            }
> 
> If the folio is released meaning it will not have dirty flag.
> Then the above folio_test_dirty() should be triggered and exit with
> -EAGAIN. We will re-search the extent io tree to re-grab a proper
> delalloc range.
> 

Thanks for ruling that one out. It seems there are no other vulnerable
call sites in mainline at the moment. Stand by for one more patch at a
different location targeting the 6.12 stable tree.