[PATCH v2 10/16] ext4: fix largest free orders lists corruption on mb_optimize_scan switch

Baokun Li posted 16 patches 3 months, 2 weeks ago
Posted by Baokun Li 3 months, 2 weeks ago
grp->bb_largest_free_order is updated regardless of whether
mb_optimize_scan is enabled. This can lead to an inconsistency between
grp->bb_largest_free_order and the actual s_mb_largest_free_orders list
the group is on when mb_optimize_scan is repeatedly enabled and disabled
via remount.

For example, suppose mb_optimize_scan is initially enabled, the largest
free order is 3, and the group is in s_mb_largest_free_orders[3]. Then
mb_optimize_scan is disabled via remount, and block allocations update
the largest free order to 2; since the lists are not maintained in this
mode, the group stays in s_mb_largest_free_orders[3]. Finally,
mb_optimize_scan is re-enabled via remount, and more block allocations
update the largest free order to 1.

At this point, the group would be removed from s_mb_largest_free_orders[3]
while holding s_mb_largest_free_orders_locks[2]. Since another task can
concurrently modify s_mb_largest_free_orders[3] under
s_mb_largest_free_orders_locks[3], this lock mismatch can corrupt the list.

To fix this, add a new field, bb_largest_free_order_idx, to struct
ext4_group_info to explicitly track which list the group is on.
bb_largest_free_order is still updated unconditionally, but
bb_largest_free_order_idx is only updated when mb_optimize_scan is
enabled, so the lock taken always matches the list being modified.

Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@vger.kernel.org
Signed-off-by: Baokun Li <libaokun1@huawei.com>
---
 fs/ext4/ext4.h    |  1 +
 fs/ext4/mballoc.c | 35 ++++++++++++++++-------------------
 2 files changed, 17 insertions(+), 19 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 003b8d3726e8..0e574378c6a3 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3476,6 +3476,7 @@ struct ext4_group_info {
 	int		bb_avg_fragment_size_order;	/* order of average
 							   fragment in BG */
 	ext4_grpblk_t	bb_largest_free_order;/* order of largest frag in BG */
+	ext4_grpblk_t	bb_largest_free_order_idx; /* order list index */
 	ext4_group_t	bb_group;	/* Group number */
 	struct          list_head bb_prealloc_list;
 #ifdef DOUBLE_CHECK
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index e6d6c2da3c6e..dc82124f0905 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1152,33 +1152,29 @@ static void
 mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
-	int i;
+	int new, old = grp->bb_largest_free_order_idx;
 
-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
-		if (grp->bb_counters[i] > 0)
+	for (new = MB_NUM_ORDERS(sb) - 1; new >= 0; new--)
+		if (grp->bb_counters[new] > 0)
 			break;
+
+	grp->bb_largest_free_order = new;
 	/* No need to move between order lists? */
-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
-	    i == grp->bb_largest_free_order) {
-		grp->bb_largest_free_order = i;
+	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || new == old)
 		return;
-	}
 
-	if (grp->bb_largest_free_order >= 0) {
-		write_lock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+	if (old >= 0) {
+		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
 		list_del_init(&grp->bb_largest_free_order_node);
-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
 	}
-	grp->bb_largest_free_order = i;
-	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
-		write_lock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+
+	grp->bb_largest_free_order_idx = new;
+	if (new >= 0 && grp->bb_free) {
+		write_lock(&sbi->s_mb_largest_free_orders_locks[new]);
 		list_add_tail(&grp->bb_largest_free_order_node,
-		      &sbi->s_mb_largest_free_orders[grp->bb_largest_free_order]);
-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+			      &sbi->s_mb_largest_free_orders[new]);
+		write_unlock(&sbi->s_mb_largest_free_orders_locks[new]);
 	}
 }
 
@@ -3391,6 +3387,7 @@ int ext4_mb_add_groupinfo(struct super_block *sb, ext4_group_t group,
 	INIT_LIST_HEAD(&meta_group_info[i]->bb_avg_fragment_size_node);
 	meta_group_info[i]->bb_largest_free_order = -1;  /* uninit */
 	meta_group_info[i]->bb_avg_fragment_size_order = -1;  /* uninit */
+	meta_group_info[i]->bb_largest_free_order_idx = -1;  /* uninit */
 	meta_group_info[i]->bb_group = group;
 
 	mb_group_bb_bitmap_alloc(sb, meta_group_info[i], group);
-- 
2.46.1
Re: [PATCH v2 10/16] ext4: fix largest free orders lists corruption on mb_optimize_scan switch
Posted by Jan Kara 3 months, 1 week ago
On Mon 23-06-25 15:32:58, Baokun Li wrote:
> grp->bb_largest_free_order is updated regardless of whether
> mb_optimize_scan is enabled. This can lead to an inconsistency between
> grp->bb_largest_free_order and the actual s_mb_largest_free_orders list
> the group is on when mb_optimize_scan is repeatedly enabled and disabled
> via remount.
> 
> For example, suppose mb_optimize_scan is initially enabled, the largest
> free order is 3, and the group is in s_mb_largest_free_orders[3]. Then
> mb_optimize_scan is disabled via remount, and block allocations update
> the largest free order to 2; since the lists are not maintained in this
> mode, the group stays in s_mb_largest_free_orders[3]. Finally,
> mb_optimize_scan is re-enabled via remount, and more block allocations
> update the largest free order to 1.
> 
> At this point, the group would be removed from s_mb_largest_free_orders[3]
> while holding s_mb_largest_free_orders_locks[2]. Since another task can
> concurrently modify s_mb_largest_free_orders[3] under
> s_mb_largest_free_orders_locks[3], this lock mismatch can corrupt the list.
> 
> To fix this, add a new field, bb_largest_free_order_idx, to struct
> ext4_group_info to explicitly track which list the group is on.
> bb_largest_free_order is still updated unconditionally, but
> bb_largest_free_order_idx is only updated when mb_optimize_scan is
> enabled, so the lock taken always matches the list being modified.
> 
> Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
> CC: stable@vger.kernel.org
> Signed-off-by: Baokun Li <libaokun1@huawei.com>

Hum, rather than duplicating the index like this, couldn't we add this
to mb_set_largest_free_order():

	/* Did mb_optimize_scan setting change? */
	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) &&
	    !list_empty(&grp->bb_largest_free_order_node)) {
		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
		list_del_init(&grp->bb_largest_free_order_node);
		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
	}

Also arguably we should reinit bb lists when mb_optimize_scan gets
reenabled because otherwise inconsistent lists could lead to suboptimal
results... But that's less important to fix I guess.
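
FWIW, a completely untested sketch of what such a relink could look like
(hypothetical helper, and assuming remount has quiesced allocator
activity):

	static void mb_relink_largest_free_order_lists(struct super_block *sb)
	{
		struct ext4_sb_info *sbi = EXT4_SB(sb);
		ext4_group_t i, ngroups = ext4_get_groups_count(sb);

		for (i = 0; i < ngroups; i++) {
			struct ext4_group_info *grp = ext4_get_group_info(sb, i);
			int order;

			if (!grp)
				continue;
			order = grp->bb_largest_free_order;
			/* Re-link groups that fell off while scan was off. */
			if (order >= 0 && grp->bb_free &&
			    list_empty(&grp->bb_largest_free_order_node)) {
				write_lock(&sbi->s_mb_largest_free_orders_locks[order]);
				list_add_tail(&grp->bb_largest_free_order_node,
					      &sbi->s_mb_largest_free_orders[order]);
				write_unlock(&sbi->s_mb_largest_free_orders_locks[order]);
			}
		}
	}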

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
Re: [PATCH v2 10/16] ext4: fix largest free orders lists corruption on mb_optimize_scan switch
Posted by Baokun Li 3 months, 1 week ago
On 2025/6/28 3:34, Jan Kara wrote:
> On Mon 23-06-25 15:32:58, Baokun Li wrote:
>> grp->bb_largest_free_order is updated regardless of whether
>> mb_optimize_scan is enabled. This can lead to an inconsistency between
>> grp->bb_largest_free_order and the actual s_mb_largest_free_orders list
>> the group is on when mb_optimize_scan is repeatedly enabled and disabled
>> via remount.
>>
>> For example, suppose mb_optimize_scan is initially enabled, the largest
>> free order is 3, and the group is in s_mb_largest_free_orders[3]. Then
>> mb_optimize_scan is disabled via remount, and block allocations update
>> the largest free order to 2; since the lists are not maintained in this
>> mode, the group stays in s_mb_largest_free_orders[3]. Finally,
>> mb_optimize_scan is re-enabled via remount, and more block allocations
>> update the largest free order to 1.
>>
>> At this point, the group would be removed from s_mb_largest_free_orders[3]
>> while holding s_mb_largest_free_orders_locks[2]. Since another task can
>> concurrently modify s_mb_largest_free_orders[3] under
>> s_mb_largest_free_orders_locks[3], this lock mismatch can corrupt the list.
>>
>> To fix this, add a new field, bb_largest_free_order_idx, to struct
>> ext4_group_info to explicitly track which list the group is on.
>> bb_largest_free_order is still updated unconditionally, but
>> bb_largest_free_order_idx is only updated when mb_optimize_scan is
>> enabled, so the lock taken always matches the list being modified.
>>
>> Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
>> CC: stable@vger.kernel.org
>> Signed-off-by: Baokun Li <libaokun1@huawei.com>
> Hum, rather than duplicating the index like this, couldn't we add this
> to mb_set_largest_free_order():
>
> 	/* Did mb_optimize_scan setting change? */
> 	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) &&
> 	    !list_empty(&grp->bb_largest_free_order_node)) {
> 		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
> 		list_del_init(&grp->bb_largest_free_order_node);
> 		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
> 	}
>
> Also arguably we should reinit bb lists when mb_optimize_scan gets
> reenabled because otherwise inconsistent lists could lead to suboptimal
> results... But that's less important to fix I guess.
>
> 								Honza

Yeah, this looks good. We just need to remove groups that were modified
while mb_optimize_scan=0 from the lists. Groups that remain on the lists
after mb_optimize_scan is re-enabled can be used normally.

As for the groups that were removed, they will be re-added to their
corresponding lists during block freeing or block allocation when
cr >= CR_GOAL_LEN_SLOW. So, I agree that we don't need to explicitly
reinit them.
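
For context, both paths converge on the same helper, which re-links the
group once its largest free order changes (call site paraphrased from
mballoc.c; the exact surroundings vary by kernel version):

	/* in both mb_free_blocks() and mb_mark_used(): */
	mb_set_largest_free_order(sb, e4b->bd_info);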



Cheers,
Baokun
