From: Baokun Li
Subject: [PATCH v3 1/3] ext4: fix bug_on ext4_mb_use_inode_pa
Date: Sat, 28 May 2022 19:00:15 +0800
Message-ID: <20220528110017.354175-2-libaokun1@huawei.com>
In-Reply-To: <20220528110017.354175-1-libaokun1@huawei.com>

Hulk Robot reported a BUG_ON:
==================================================================
kernel BUG at fs/ext4/mballoc.c:3211!
[...]
RIP: 0010:ext4_mb_mark_diskspace_used.cold+0x85/0x136f
[...]
Call Trace:
 ext4_mb_new_blocks+0x9df/0x5d30
 ext4_ext_map_blocks+0x1803/0x4d80
 ext4_map_blocks+0x3a4/0x1a10
 ext4_writepages+0x126d/0x2c30
 do_writepages+0x7f/0x1b0
 __filemap_fdatawrite_range+0x285/0x3b0
 file_write_and_wait_range+0xb1/0x140
 ext4_sync_file+0x1aa/0xca0
 vfs_fsync_range+0xfb/0x260
 do_fsync+0x48/0xa0
[...]
==================================================================

Above issue may happen as follows:
-------------------------------------
do_fsync
 vfs_fsync_range
  ext4_sync_file
   file_write_and_wait_range
    __filemap_fdatawrite_range
     do_writepages
      ext4_writepages
       mpage_map_and_submit_extent
        mpage_map_one_extent
         ext4_map_blocks
          ext4_mb_new_blocks
           ext4_mb_normalize_request
            >>> start + size <= ac->ac_o_ex.fe_logical
           ext4_mb_regular_allocator
            ext4_mb_simple_scan_group
             ext4_mb_use_best_found
              ext4_mb_new_preallocation
               ext4_mb_new_inode_pa
                ext4_mb_use_inode_pa
                 >>> set ac->ac_b_ex.fe_len <= 0
           ext4_mb_mark_diskspace_used
            >>> BUG_ON(ac->ac_b_ex.fe_len <= 0);

We can easily reproduce this problem with the following commands:
        `fallocate -l100M disk`
        `mkfs.ext4 -b 1024 -g 256 disk`
        `mount disk /mnt`
        `fsstress -d /mnt -l 0 -n 1000 -p 1`

The normalized request size must be smaller than or equal to
EXT4_BLOCKS_PER_GROUP, so "start + size <= ac->ac_o_ex.fe_logical" can
occur once the size has been truncated to the group size. start should
therefore be the first block of the group that contains
ac_o_ex.fe_logical after alignment. In addition, when fe_logical or
EXT4_BLOCKS_PER_GROUP is very large, the value derived from start_off
is more accurate.

Fixes: cd648b8a8fd5 ("ext4: trim allocation requests to group size")
Reported-by: Hulk Robot
Reviewed-by: Ritesh Harjani
Signed-off-by: Baokun Li
---
V1->V2:
        Replace round_down() with rounddown(). Modified comments.
V2->V3:
        Convert EXT4_BLOCKS_PER_GROUP type to ext4_lblk_t to avoid
        compilation warnings.

 fs/ext4/mballoc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 9f12f29bc346..4d3740fdff90 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4104,6 +4104,15 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	size = size >> bsbits;
 	start = start_off >> bsbits;
 
+	/*
+	 * For tiny groups (smaller than 8MB) the chosen allocation
+	 * alignment may be larger than group size. Make sure the
+	 * alignment does not move allocation to a different group which
+	 * makes mballoc fail assertions later.
+	 */
+	start = max(start, rounddown(ac->ac_o_ex.fe_logical,
+			(ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
+
 	/* don't cover already allocated blocks in selected range */
 	if (ar->pleft && start <= ar->lleft) {
 		size -= ar->lleft + 1 - start;
-- 
2.31.1
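[Editorial note: to make the failure mode in patch 1/3 concrete, here is a
small user-space sketch of the arithmetic. The group geometry follows the
reproducer (1 KiB blocks, "mkfs.ext4 -g 256"), but the logical block
numbers, the local rounddown() macro, and the order of the steps are
simplified illustrations, not values or code taken from the kernel.]

/*
 * Sketch: why trimming the goal window can leave fe_logical uncovered,
 * and how snapping start to the group boundary restores coverage.
 */
#include <stdio.h>

/* mirrors the kernel's rounddown() for unsigned values */
#define rounddown(x, y) (((x) / (y)) * (y))

int main(void)
{
	unsigned int blocks_per_group = 256;   /* mkfs.ext4 -b 1024 -g 256 */
	unsigned int fe_logical = 1000;        /* hypothetical ac_o_ex.fe_logical */
	unsigned int start = 512, size = 512;  /* hypothetical normalized goal */

	/* trim allocation request to the (artificially small) group size */
	if (size > blocks_per_group)
		size = blocks_per_group;

	printf("trimmed goal [%u, %u) covers block %u? %s\n", start, start + size,
	       fe_logical,
	       (start <= fe_logical && fe_logical < start + size) ? "yes" : "no");

	/* the fix: never let the goal start before fe_logical's group */
	if (start < rounddown(fe_logical, blocks_per_group))
		start = rounddown(fe_logical, blocks_per_group);

	printf("fixed goal   [%u, %u) covers block %u? %s\n", start, start + size,
	       fe_logical,
	       (start <= fe_logical && fe_logical < start + size) ? "yes" : "no");
	return 0;
}

With the old behaviour the trimmed window [512, 768) no longer contains
block 1000, which is exactly the "start + size <= ac->ac_o_ex.fe_logical"
condition that later drives ac_b_ex.fe_len to zero in
ext4_mb_use_inode_pa(); snapping start to the group boundary at 768 keeps
fe_logical inside the goal window.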
From: Baokun Li
Subject: [PATCH v3 2/3] ext4: correct the judgment of BUG in ext4_mb_normalize_request
Date: Sat, 28 May 2022 19:00:16 +0800
Message-ID: <20220528110017.354175-3-libaokun1@huawei.com>
In-Reply-To: <20220528110017.354175-1-libaokun1@huawei.com>

ext4_mb_normalize_request() can move the logical start of the allocated
blocks to reduce fragmentation and better utilize preallocation. However,
the logical block requested as the start of the allocation
(ac->ac_o_ex.fe_logical) should always be covered by the allocated blocks,
so enforce that by changing the "&&" in the check to "||".

Signed-off-by: Baokun Li
Reviewed-by: Ritesh Harjani
---
V1->V2:
        Change Fixes from dfe076c106f6 to c9de560ded61.
V2->V3:
        Delete Fixes tag.
        Add more comments and commit logs to make the code easier to
        understand.

 fs/ext4/mballoc.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 4d3740fdff90..9e06334771a3 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4185,7 +4185,22 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	}
 	rcu_read_unlock();
 
-	if (start + size <= ac->ac_o_ex.fe_logical &&
+	/*
+	 * In this function "start" and "size" are normalized for better
+	 * alignment and length such that we could preallocate more blocks.
+	 * This normalization is done such that original request of
+	 * ac->ac_o_ex.fe_logical & fe_len should always lie within "start" and
+	 * "size" boundaries.
+	 * (Note fe_len can be relaxed since FS block allocation API does not
+	 * provide guarantee on number of contiguous blocks allocation since that
+	 * depends upon free space left, etc).
+	 * In case of inode pa, later we use the allocated blocks
+	 * [pa_start + fe_logical - pa_lstart, fe_len/size] from the preallocated
+	 * range of goal/best blocks [start, size] to put it at the
+	 * ac_o_ex.fe_logical extent of this inode.
+	 * (See ext4_mb_use_inode_pa() for more details)
+	 */
+	if (start + size <= ac->ac_o_ex.fe_logical ||
 	    start > ac->ac_o_ex.fe_logical) {
 		ext4_msg(ac->ac_sb, KERN_ERR,
 			 "start %lu, size %lu, fe_logical %lu",
-- 
2.31.1
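[Editorial note: the reason the "&&" form needed correcting is that, for
any positive size, "start > fe_logical" and "start + size <= fe_logical"
cannot both hold, so the old sanity check could never fire. The minimal
sketch below contrasts the two predicates; bad_range_old()/bad_range_new()
and the block numbers are hypothetical, not kernel code.]

#include <stdbool.h>
#include <stdio.h>

/* old form: the two sub-conditions are mutually exclusive for size > 0 */
static bool bad_range_old(unsigned long start, unsigned long size,
			  unsigned long fe_logical)
{
	return start + size <= fe_logical && start > fe_logical;
}

/* new form: fires whenever fe_logical lies outside [start, start + size) */
static bool bad_range_new(unsigned long start, unsigned long size,
			  unsigned long fe_logical)
{
	return start + size <= fe_logical || start > fe_logical;
}

int main(void)
{
	/* goal range [768, 1024) does not cover fe_logical = 1500 */
	printf("old check fires: %d\n", bad_range_old(768, 256, 1500)); /* 0 */
	printf("new check fires: %d\n", bad_range_new(768, 256, 1500)); /* 1 */
	return 0;
}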
From: Baokun Li
Subject: [PATCH v3 3/3] ext4: support flex_bg in ext4_mb_normalize_request
Date: Sat, 28 May 2022 19:00:17 +0800
Message-ID: <20220528110017.354175-4-libaokun1@huawei.com>
In-Reply-To: <20220528110017.354175-1-libaokun1@huawei.com>

In ext4_mb_normalize_request(), the size of the allocation request is
limited to no more than EXT4_BLOCKS_PER_GROUP. Ritesh pointed out that
this does not take flex_bg groups into account, so add flex_bg support
to keep the physical blocks of large files contiguous.
Signed-off-by: Baokun Li
---
 fs/ext4/mballoc.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 9e06334771a3..253fc250e9a0 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4028,6 +4028,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	loff_t size, start_off;
 	loff_t orig_size __maybe_unused;
 	ext4_lblk_t start;
+	ext4_lblk_t bpg;
 	struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
 	struct ext4_prealloc_space *pa;
 
@@ -4051,6 +4052,11 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	}
 
 	bsbits = ac->ac_sb->s_blocksize_bits;
+	bpg = EXT4_BLOCKS_PER_GROUP(ac->ac_sb);
+	if (ext4_has_feature_flex_bg(ac->ac_sb) && sbi->s_log_groups_per_flex) {
+		if (check_shl_overflow(bpg, sbi->s_log_groups_per_flex, &bpg))
+			bpg = EXT_MAX_BLOCKS;
+	}
 
 	/* first, let's learn actual file size
 	 * given current request is allocated */
@@ -4110,8 +4116,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	 * alignment does not move allocation to a different group which
 	 * makes mballoc fail assertions later.
 	 */
-	start = max(start, rounddown(ac->ac_o_ex.fe_logical,
-			(ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
+	start = max(start, rounddown(ac->ac_o_ex.fe_logical, bpg));
 
 	/* don't cover already allocated blocks in selected range */
 	if (ar->pleft && start <= ar->lleft) {
@@ -4125,8 +4130,8 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	 * Trim allocation request for filesystems with artificially small
 	 * groups.
 	 */
-	if (size > EXT4_BLOCKS_PER_GROUP(ac->ac_sb))
-		size = EXT4_BLOCKS_PER_GROUP(ac->ac_sb);
+	if (size > bpg)
+		size = bpg;
 
 	end = start + size;
 
@@ -4208,7 +4213,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 		 (unsigned long) ac->ac_o_ex.fe_logical);
 		BUG();
 	}
-	BUG_ON(size <= 0 || size > EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
+	BUG_ON(size <= 0 || size > bpg);
 
 	/* now prepare goal request */
 
-- 
2.31.1
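[Editorial note: the sketch below mirrors, in user space, the flex_bg-aware
cap that patch 3/3 computes: with flex_bg enabled the normalized request
may span a whole flex group rather than a single block group, and the
shifted value is clamped if it would overflow a 32-bit logical block
number, which is what check_shl_overflow() guards against in the kernel
code. The flex_bg_cap() helper and the example geometry (32768 blocks per
group, 16 groups per flex group, i.e. the common 4 KiB-block defaults) are
illustrative assumptions.]

#include <stdint.h>
#include <stdio.h>

#define EXT_MAX_BLOCKS 0xffffffffU	/* largest ext4 logical block */

/* compute the per-request cap in blocks, clamped to 32 bits */
static uint32_t flex_bg_cap(uint32_t blocks_per_group,
			    unsigned int log_groups_per_flex)
{
	uint64_t bpg = (uint64_t)blocks_per_group << log_groups_per_flex;

	return bpg > EXT_MAX_BLOCKS ? EXT_MAX_BLOCKS : (uint32_t)bpg;
}

int main(void)
{
	printf("cap without flex_bg: %u blocks\n", flex_bg_cap(32768, 0));
	printf("cap with flex_bg:    %u blocks\n", flex_bg_cap(32768, 4));
	return 0;
}

The patch then uses this larger cap (bpg) in place of
EXT4_BLOCKS_PER_GROUP for trimming the normalized size, for the group
alignment added in patch 1/3, and for the final BUG_ON, so large files can
keep their physical blocks contiguous across a flex group.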