From: Baokun Li
Subject: [PATCH v2 3/3] ext4: avoid overlapping preallocations due to overflow
Date: Mon, 24 Jul 2023 20:10:59 +0800
Message-ID: <20230724121059.11834-4-libaokun1@huawei.com>
In-Reply-To: <20230724121059.11834-1-libaokun1@huawei.com>
References: <20230724121059.11834-1-libaokun1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Let's say we want to allocate 2 blocks starting from 4294966386. After
predicting the file size, start is aligned to 4294965248 and len is
changed to 2048, so end = start + size = 0x100000000. Since end is of
type ext4_lblk_t, i.e. uint, end is truncated to 0.

This causes (pa->pa_lstart >= end) to always hold when checking whether
the current extent to be allocated crosses already preallocated blocks,
so the resulting ac_g_ex may cross already preallocated blocks. Hence we
convert the end type to loff_t and use pa_logical_end() to avoid
overflow.

Signed-off-by: Baokun Li
Reviewed-by: Jan Kara
Reviewed-by: Ritesh Harjani (IBM)
---
 fs/ext4/mballoc.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 86bce870dc5a..78a4a24e2f57 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4222,12 +4222,13 @@ ext4_mb_pa_rb_next_iter(ext4_lblk_t new_start, ext4_lblk_t cur_start, struct rb_
 
 static inline void
 ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
-			  ext4_lblk_t start, ext4_lblk_t end)
+			  ext4_lblk_t start, loff_t end)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
 	struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
 	struct ext4_prealloc_space *tmp_pa;
-	ext4_lblk_t tmp_pa_start, tmp_pa_end;
+	ext4_lblk_t tmp_pa_start;
+	loff_t tmp_pa_end;
 	struct rb_node *iter;
 
 	read_lock(&ei->i_prealloc_lock);
@@ -4236,7 +4237,7 @@ ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
 		tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
 				  pa_node.inode_node);
 		tmp_pa_start = tmp_pa->pa_lstart;
-		tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+		tmp_pa_end = pa_logical_end(sbi, tmp_pa);
 
 		spin_lock(&tmp_pa->pa_lock);
 		if (tmp_pa->pa_deleted == 0)
@@ -4258,14 +4259,14 @@ ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
  */
 static inline void
 ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
-			  ext4_lblk_t *start, ext4_lblk_t *end)
+			  ext4_lblk_t *start, loff_t *end)
 {
 	struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
 	struct ext4_prealloc_space *tmp_pa = NULL, *left_pa = NULL, *right_pa = NULL;
 	struct rb_node *iter;
-	ext4_lblk_t new_start, new_end;
-	ext4_lblk_t tmp_pa_start, tmp_pa_end, left_pa_end = -1, right_pa_start = -1;
+	ext4_lblk_t new_start, tmp_pa_start, right_pa_start = -1;
+	loff_t new_end, tmp_pa_end, left_pa_end = -1;
 
 	new_start = *start;
 	new_end = *end;
@@ -4284,7 +4285,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
 		tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
 				  pa_node.inode_node);
 		tmp_pa_start = tmp_pa->pa_lstart;
-		tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+		tmp_pa_end = pa_logical_end(sbi, tmp_pa);
 
 		/* PA must not overlap original request */
 		spin_lock(&tmp_pa->pa_lock);
@@ -4364,8 +4365,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
 	}
 
 	if (left_pa) {
-		left_pa_end =
-			left_pa->pa_lstart + EXT4_C2B(sbi, left_pa->pa_len);
+		left_pa_end = pa_logical_end(sbi, left_pa);
 		BUG_ON(left_pa_end > ac->ac_o_ex.fe_logical);
 	}
 
@@ -4404,8 +4404,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
 	struct ext4_super_block *es = sbi->s_es;
 	int bsbits, max;
-	ext4_lblk_t end;
-	loff_t size, start_off;
+	loff_t size, start_off, end;
 	loff_t orig_size __maybe_unused;
 	ext4_lblk_t start;
 
-- 
2.31.1