[PATCH 2/2] btrfs: prevent direct reclaim during compressed readahead

JP Kobryn (Meta) posted 2 patches 2 weeks, 3 days ago
There is a newer version of this series
Posted by JP Kobryn (Meta) 2 weeks, 3 days ago
Prevent direct reclaim during compressed readahead. This is achieved by
passing specific GFP flags whenever the bio is marked for readahead. The
flags are similar to GFP_NOFS but stripped of __GFP_DIRECT_RECLAIM. Also,
__GFP_NOWARN is added since these allocations are allowed to fail. Demand
reads still use full GFP_NOFS and will enter reclaim if needed.

btrfs_submit_compressed_read() now makes use of the new gfp_t API for
its internal allocations. Since non-readahead code may call this
function, the bio flags are inspected to determine whether direct
reclaim should be restricted.

add_ra_bio_pages() gains a bool parameter that lets callers specify
whether direct reclaim is allowed. In either case, __GFP_NOWARN is
added unconditionally since the allocations are speculative.

Signed-off-by: JP Kobryn (Meta) <jp.kobryn@linux.dev>
---
 fs/btrfs/compression.c | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index ae9cb5b7676c..f32cfc933bee 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -372,7 +372,8 @@ struct compressed_bio *btrfs_alloc_compressed_write(struct btrfs_inode *inode,
 static noinline int add_ra_bio_pages(struct inode *inode,
 				     u64 compressed_end,
 				     struct compressed_bio *cb,
-				     int *memstall, unsigned long *pflags)
+				     int *memstall, unsigned long *pflags,
+				     bool direct_reclaim)
 {
 	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
 	pgoff_t end_index;
@@ -380,6 +381,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 	u64 cur = cb->orig_bbio->file_offset + orig_bio->bi_iter.bi_size;
 	u64 isize = i_size_read(inode);
 	int ret;
+	gfp_t constraint_gfp, cache_gfp;
 	struct folio *folio;
 	struct extent_map *em;
 	struct address_space *mapping = inode->i_mapping;
@@ -409,6 +411,14 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 
 	end_index = (i_size_read(inode) - 1) >> PAGE_SHIFT;
 
+	if (!direct_reclaim) {
+		constraint_gfp = ~(__GFP_FS | __GFP_DIRECT_RECLAIM);
+		cache_gfp = (GFP_NOFS & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN;
+	} else {
+		constraint_gfp = ~__GFP_FS;
+		cache_gfp = GFP_NOFS | __GFP_NOWARN;
+	}
+
 	while (cur < compressed_end) {
 		pgoff_t page_end;
 		pgoff_t pg_index = cur >> PAGE_SHIFT;
@@ -438,12 +448,13 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			continue;
 		}
 
-		folio = filemap_alloc_folio(mapping_gfp_constraint(mapping, ~__GFP_FS),
+		folio = filemap_alloc_folio(mapping_gfp_constraint(mapping,
+					    constraint_gfp) | __GFP_NOWARN,
 					    0, NULL);
 		if (!folio)
 			break;
 
-		if (filemap_add_folio(mapping, folio, pg_index, GFP_NOFS)) {
+		if (filemap_add_folio(mapping, folio, pg_index, cache_gfp)) {
 			/* There is already a page, skip to page end */
 			cur += folio_size(folio);
 			folio_put(folio);
@@ -536,6 +547,7 @@ void btrfs_submit_compressed_read(struct btrfs_bio *bbio)
 	unsigned int compressed_len;
 	const u32 min_folio_size = btrfs_min_folio_size(fs_info);
 	u64 file_offset = bbio->file_offset;
+	gfp_t gfp;
 	u64 em_len;
 	u64 em_start;
 	struct extent_map *em;
@@ -543,6 +555,17 @@ void btrfs_submit_compressed_read(struct btrfs_bio *bbio)
 	int memstall = 0;
 	int ret;
 
+	/*
+	 * If this is a readahead bio, prevent direct reclaim. This is done to
+	 * avoid stalling on speculative allocations when memory pressure is
+	 * high. The demand fault will retry with GFP_NOFS and enter direct
+	 * reclaim if needed.
+	 */
+	if (bbio->bio.bi_opf & REQ_RAHEAD)
+		gfp = (GFP_NOFS & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN;
+	else
+		gfp = GFP_NOFS;
+
 	/* we need the actual starting offset of this extent in the file */
 	read_lock(&em_tree->lock);
 	em = btrfs_lookup_extent_mapping(em_tree, file_offset, fs_info->sectorsize);
@@ -573,7 +596,7 @@ void btrfs_submit_compressed_read(struct btrfs_bio *bbio)
 		struct folio *folio;
 		u32 cur_len = min(compressed_len - i * min_folio_size, min_folio_size);
 
-		folio = btrfs_alloc_compr_folio(fs_info);
+		folio = btrfs_alloc_compr_folio_gfp(fs_info, gfp);
 		if (!folio) {
 			ret = -ENOMEM;
 			goto out_free_bio;
@@ -589,7 +612,7 @@ void btrfs_submit_compressed_read(struct btrfs_bio *bbio)
 	ASSERT(cb->bbio.bio.bi_iter.bi_size == compressed_len);
 
 	add_ra_bio_pages(&inode->vfs_inode, em_start + em_len, cb, &memstall,
-			 &pflags);
+			 &pflags, !(bbio->bio.bi_opf & REQ_RAHEAD));
 
 	cb->len = bbio->bio.bi_iter.bi_size;
 	cb->bbio.bio.bi_iter.bi_sector = bbio->bio.bi_iter.bi_sector;
-- 
2.52.0
Re: [PATCH 2/2] btrfs: prevent direct reclaim during compressed readahead
Posted by Mark Harmstone 2 weeks, 2 days ago
Reviewed-by: Mark Harmstone <mark@harmstone.com>

Re: [PATCH 2/2] btrfs: prevent direct reclaim during compressed readahead
Posted by Qu Wenruo 2 weeks, 2 days ago

On 2026/3/20 18:04, JP Kobryn (Meta) wrote:
> Prevent direct reclaim during compressed readahead. This is achieved by
> passing specific GFP flags whenever the bio is marked for readahead. The
> flags are similar to GFP_NOFS but stripped of __GFP_DIRECT_RECLAIM. Also,
> __GFP_NOWARN is added since these allocations are allowed to fail. Demand
> reads still use full GFP_NOFS and will enter reclaim if needed.

I believe it would be more convincing to explain why the current gfp
flags cause the problem you mentioned in the cover letter.

> 
> btrfs_submit_compressed_read() now makes use of the new gfp_t API for
> allocations within. Since non-readahead code may call this function, the
> bio flags are inspected to determine whether direct reclaim should be
> restricted or not.
> 
> add_ra_bio_pages() gains a bool parameter which allows callers to specify
> if they want to allow direct reclaim or not. In either case, the NOWARN
> flag was added unconditionally since the allocations are speculative.

After reading the code, I have a feeling that we shouldn't act on
behalf of the MM layer to add the next few folios into the page cache.

On the other hand, with the incoming large folio support, we will
completely skip this readahead for large folios.

I know this is not optimal, as the next few folios may still belong to
the same compressed extent and will cause a re-read and re-decompression.

Thus I'm wondering: for your specific workload, would disabling
compressed readahead completely and relying fully on large folios help?

If the performance is acceptable, I'd prefer to disable compressed 
readahead completely and rely on large folios instead.

(Now I understand why other fses with compression support rely
completely on a fixed IO size.)

Thanks,
Qu


Re: [PATCH 2/2] btrfs: prevent direct reclaim during compressed readahead
Posted by Mark Harmstone 2 weeks, 2 days ago
On 20/03/2026 10.12 am, Qu Wenruo wrote:
> 
> 
> On 2026/3/20 18:04, JP Kobryn (Meta) wrote:
>> Prevent direct reclaim during compressed readahead. This is achieved by
>> passing specific GFP flags whenever the bio is marked for readahead. The
>> flags are similar to GFP_NOFS but stripped of __GFP_DIRECT_RECLAIM. Also,
>> __GFP_NOWARN is added since these allocations are allowed to fail. Demand
>> reads still use full GFP_NOFS and will enter reclaim if needed.
> 
> I believe it would be more convincing to explain why the current gfp
> flags cause the problem you mentioned in the cover letter.
> 
>>
>> btrfs_submit_compressed_read() now makes use of the new gfp_t API for
>> allocations within. Since non-readahead code may call this function, the
>> bio flags are inspected to determine whether direct reclaim should be
>> restricted or not.
>>
>> add_ra_bio_pages() gains a bool parameter which allows callers to specify
>> if they want to allow direct reclaim or not. In either case, the NOWARN
>> flag was added unconditionally since the allocations are speculative.
> 
> After reading the code, I have a feeling that we shouldn't act on
> behalf of the MM layer to add the next few folios into the page cache.

Your idea might have merit, but this is a quick fix for a problem that
JP has seen in production. Reworking the whole thing might be the
ultimate solution, but it's much riskier and will need more testing
than the proposed change.


Re: [PATCH 2/2] btrfs: prevent direct reclaim during compressed readahead
Posted by Christoph Hellwig 2 weeks, 3 days ago
On Fri, Mar 20, 2026 at 12:34:45AM -0700, JP Kobryn (Meta) wrote:
> Prevent direct reclaim during compressed readahead.

This completely fails to explain why you want that.
Re: [PATCH 2/2] btrfs: prevent direct reclaim during compressed readahead
Posted by JP Kobryn (Meta) 2 weeks, 2 days ago
On 3/20/26 12:36 AM, Christoph Hellwig wrote:
> On Fri, Mar 20, 2026 at 12:34:45AM -0700, JP Kobryn (Meta) wrote:
>> Prevent direct reclaim during compressed readahead.
> 
> This completely fails to explain why you want that.

I see that now.

Qu also had a good point about including info on why the current flags
are an issue. Some of the cover letter text would help here. I'll bring
some of that over and expand on it so we have the relevant details here.