From: Baokun Li
Subject: [PATCH v2 09/16] ext4: fix zombie groups in average fragment size lists
Date: Mon, 23 Jun 2025 15:32:57 +0800
Message-ID: <20250623073304.3275702-10-libaokun1@huawei.com>
In-Reply-To: <20250623073304.3275702-1-libaokun1@huawei.com>
References: <20250623073304.3275702-1-libaokun1@huawei.com>

Groups with no free blocks shouldn't be in any average fragment size list.
However, when all blocks in a group are allocated (i.e., bb_fragments or
bb_free is 0), we currently skip updating the average fragment size, which
means the group isn't removed from its previous s_mb_avg_fragment_size[old]
list.

This creates "zombie" groups that are always skipped during traversal, as
they can never satisfy any block allocation request, negatively impacting
traversal efficiency.

Therefore, when a group becomes completely full (i.e., its free block count
drops to 0), bb_avg_fragment_size_order is now set to -1. If the old order
was not -1, the group is removed from the old list; if the new order is not
-1, it is inserted into the new list.
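To make the intended behaviour concrete, here is a minimal, self-contained
userspace sketch (illustrative only, not ext4 code: demo_group, NR_ORDERS,
the ad-hoc list helpers and the order computation are stand-ins for
ext4_group_info, MB_NUM_ORDERS, the kernel list API and
mb_avg_fragment_size_order()). It shows the same pattern the patch adopts:
a group sits on at most one per-order list, order -1 means "on no list",
and a fully allocated group is taken off its list instead of lingering as
a zombie.

  /*
   * Userspace mock-up of the per-order list maintenance described above.
   * All names and the order computation are placeholders.
   */
  #include <stdio.h>

  #define NR_ORDERS 4

  struct demo_group {
          int order;                       /* current list index, -1 = none */
          struct demo_group *prev, *next;
  };

  static struct demo_group *lists[NR_ORDERS]; /* heads of per-order lists */

  static void list_remove(struct demo_group *g)
  {
          if (g->prev)
                  g->prev->next = g->next;
          else
                  lists[g->order] = g->next;
          if (g->next)
                  g->next->prev = g->prev;
          g->prev = g->next = NULL;
  }

  static void list_insert(struct demo_group *g, int order)
  {
          g->prev = NULL;
          g->next = lists[order];
          if (lists[order])
                  lists[order]->prev = g;
          lists[order] = g;
  }

  /*
   * Same shape as the patched mb_update_avg_fragment_size(): compute the
   * new order (-1 when there are no free fragments), do nothing if it is
   * unchanged, otherwise unlink from the old list and link into the new.
   */
  static void update_order(struct demo_group *g, int free_blks, int frags)
  {
          int old = g->order;
          int new = frags == 0 ? -1 : (free_blks / frags) % NR_ORDERS;

          if (new == old)
                  return;

          if (old >= 0)
                  list_remove(g);

          g->order = new;
          if (new >= 0)
                  list_insert(g, new);
  }

  int main(void)
  {
          struct demo_group g = { .order = -1 };

          update_order(&g, 64, 4);  /* some free space: joins a list */
          printf("order with free space: %d\n", g.order);

          update_order(&g, 0, 0);   /* fully allocated: leaves its list */
          printf("order when full: %d\n", g.order);
          return 0;
  }

Running the sketch prints order 0 while the group has free space and -1
once it fills up, mirroring the removal this patch adds.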
Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@vger.kernel.org
Signed-off-by: Baokun Li
Reviewed-by: Jan Kara
---
 fs/ext4/mballoc.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 94950b07a577..e6d6c2da3c6e 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -841,30 +841,30 @@ static void mb_update_avg_fragment_size(struct super_block *sb,
 					struct ext4_group_info *grp)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
-	int new_order;
+	int new, old;
 
-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_fragments == 0)
+	if (!test_opt2(sb, MB_OPTIMIZE_SCAN))
 		return;
 
-	new_order = mb_avg_fragment_size_order(sb,
-					grp->bb_free / grp->bb_fragments);
-	if (new_order == grp->bb_avg_fragment_size_order)
+	old = grp->bb_avg_fragment_size_order;
+	new = grp->bb_fragments == 0 ? -1 :
+	      mb_avg_fragment_size_order(sb, grp->bb_free / grp->bb_fragments);
+	if (new == old)
 		return;
 
-	if (grp->bb_avg_fragment_size_order != -1) {
-		write_lock(&sbi->s_mb_avg_fragment_size_locks[
-				grp->bb_avg_fragment_size_order]);
+	if (old >= 0) {
+		write_lock(&sbi->s_mb_avg_fragment_size_locks[old]);
 		list_del(&grp->bb_avg_fragment_size_node);
-		write_unlock(&sbi->s_mb_avg_fragment_size_locks[
-				grp->bb_avg_fragment_size_order]);
-	}
-	grp->bb_avg_fragment_size_order = new_order;
-	write_lock(&sbi->s_mb_avg_fragment_size_locks[
-			grp->bb_avg_fragment_size_order]);
-	list_add_tail(&grp->bb_avg_fragment_size_node,
-		&sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]);
-	write_unlock(&sbi->s_mb_avg_fragment_size_locks[
-			grp->bb_avg_fragment_size_order]);
+		write_unlock(&sbi->s_mb_avg_fragment_size_locks[old]);
+	}
+
+	grp->bb_avg_fragment_size_order = new;
+	if (new >= 0) {
+		write_lock(&sbi->s_mb_avg_fragment_size_locks[new]);
+		list_add_tail(&grp->bb_avg_fragment_size_node,
+				&sbi->s_mb_avg_fragment_size[new]);
+		write_unlock(&sbi->s_mb_avg_fragment_size_locks[new]);
+	}
 }
 
 /*
-- 
2.46.1