From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
	ojaswin@linux.ibm.com, ritesh.list@gmail.com,
	libaokun@linux.alibaba.com, yi.zhang@huawei.com,
	yi.zhang@huaweicloud.com, yizhang089@gmail.com,
	yangerkun@huawei.com, yukuai@fnnas.com
Subject: [PATCH v2 01/10] ext4: add did_zero output parameter to ext4_block_zero_page_range()
Date: Wed, 25 Mar 2026 15:28:40 +0800
Message-ID: <20260325072850.3997161-2-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Add a bool *did_zero output parameter to ext4_block_zero_page_range()
and __ext4_block_zero_page_range(). The parameter reports whether a
partial block was zeroed out, which is needed for the upcoming iomap
buffered I/O conversion.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/inode.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index af6d1759c8de..1c9474d5d11d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4003,7 +4003,8 @@ void ext4_set_aops(struct inode *inode)
  * racing writeback can come later and flush the stale pagecache to disk.
  */
 static int __ext4_block_zero_page_range(handle_t *handle,
-		struct address_space *mapping, loff_t from, loff_t length)
+		struct address_space *mapping, loff_t from, loff_t length,
+		bool *did_zero)
 {
 	unsigned int offset, blocksize, pos;
 	ext4_lblk_t iblock;
@@ -4091,6 +4092,8 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 			err = ext4_jbd2_inode_add_write(handle, inode, from, length);
 	}
+	if (!err && did_zero)
+		*did_zero = true;
 
 unlock:
 	folio_unlock(folio);
@@ -4106,7 +4109,8 @@ static int __ext4_block_zero_page_range(handle_t *handle,
  * that corresponds to 'from'
  */
 static int ext4_block_zero_page_range(handle_t *handle,
-		struct address_space *mapping, loff_t from, loff_t length)
+		struct address_space *mapping, loff_t from, loff_t length,
+		bool *did_zero)
 {
 	struct inode *inode = mapping->host;
 	unsigned blocksize = inode->i_sb->s_blocksize;
@@ -4120,10 +4124,11 @@ static int ext4_block_zero_page_range(handle_t *handle,
 		length = max;
 
 	if (IS_DAX(inode)) {
-		return dax_zero_range(inode, from, length, NULL,
+		return dax_zero_range(inode, from, length, did_zero,
 				      &ext4_iomap_ops);
 	}
-	return __ext4_block_zero_page_range(handle, mapping, from, length);
+	return __ext4_block_zero_page_range(handle, mapping, from, length,
+			did_zero);
 }
 
 /*
@@ -4146,7 +4151,7 @@ static int ext4_block_truncate_page(handle_t *handle,
 	blocksize = i_blocksize(inode);
 	length = blocksize - (from & (blocksize - 1));
 
-	return ext4_block_zero_page_range(handle, mapping, from, length);
+	return ext4_block_zero_page_range(handle, mapping, from, length, NULL);
 }
 
 int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
@@ -4169,13 +4174,13 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 	if (start == end &&
 	    (partial_start || (partial_end != sb->s_blocksize - 1))) {
 		err = ext4_block_zero_page_range(handle, mapping,
-						 lstart, length);
+						 lstart, length, NULL);
 		return err;
 	}
 	/* Handle partial zero out on the start of the range */
 	if (partial_start) {
-		err = ext4_block_zero_page_range(handle, mapping,
-						 lstart, sb->s_blocksize);
+		err = ext4_block_zero_page_range(handle, mapping, lstart,
+						 sb->s_blocksize, NULL);
 		if (err)
 			return err;
 	}
@@ -4183,7 +4188,7 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 	if (partial_end != sb->s_blocksize - 1)
 		err = ext4_block_zero_page_range(handle, mapping,
 						 byte_end - partial_end,
-						 partial_end + 1);
+						 partial_end + 1, NULL);
 	return err;
 }
 
-- 
2.52.0
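The out-parameter convention patch 01 introduces can be modeled in a few lines of user-space C. This is a minimal sketch, not kernel code: zero_range() is a hypothetical stand-in for ext4_block_zero_page_range(), showing how did_zero is set only on success and how callers that don't care simply pass NULL, as ext4_block_truncate_page() does above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a partial-block zeroing helper: zeroes
 * buf[from..from+len) and reports through the optional did_zero
 * out-parameter, mirroring the convention added in patch 01. */
static int zero_range(char *buf, size_t from, size_t len, bool *did_zero)
{
	int err = 0;

	memset(buf + from, 0, len);
	/* Report only on success; NULL means the caller doesn't care. */
	if (!err && did_zero)
		*did_zero = true;
	return err;
}
```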
From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
Subject: [PATCH v2 02/10] ext4: ext4_block_truncate_page() returns zeroed length on success
Date: Wed, 25 Mar 2026 15:28:41 +0800
Message-ID: <20260325072850.3997161-3-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Return the actual zeroed length instead of 0 on success. This prepares
for the upcoming iomap buffered I/O conversion by exposing zeroed
length information to callers.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/inode.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 1c9474d5d11d..f21be26b4642 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4136,6 +4136,7 @@ static int ext4_block_zero_page_range(handle_t *handle,
  * up to the end of the block which corresponds to `from'.
  * This required during truncate. We need to physically zero the tail end
  * of that block so it doesn't yield old data if the file is later grown.
+ * Return the zeroed length on success.
  */
 static int ext4_block_truncate_page(handle_t *handle,
 		struct address_space *mapping, loff_t from)
@@ -4143,6 +4144,8 @@ static int ext4_block_truncate_page(handle_t *handle,
 	unsigned length;
 	unsigned blocksize;
 	struct inode *inode = mapping->host;
+	bool did_zero = false;
+	int err;
 
 	/* If we are processing an encrypted inode during orphan list handling */
 	if (IS_ENCRYPTED(inode) && !fscrypt_has_encryption_key(inode))
@@ -4151,7 +4154,12 @@ static int ext4_block_truncate_page(handle_t *handle,
 	blocksize = i_blocksize(inode);
 	length = blocksize - (from & (blocksize - 1));
 
-	return ext4_block_zero_page_range(handle, mapping, from, length, NULL);
+	err = ext4_block_zero_page_range(handle, mapping, from, length,
+					 &did_zero);
+	if (err)
+		return err;
+
+	return did_zero ? length : 0;
 }
 
 int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
-- 
2.52.0
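The return convention patch 02 establishes is simple arithmetic: the zeroed tail length is the distance from 'from' to the end of its block, returned only when zeroing actually happened. A minimal user-space model (tail_zero_len() is a hypothetical helper name, assuming a power-of-two block size as ext4 uses):

```c
#include <assert.h>
#include <stdbool.h>

/* Models the return value patch 02 gives ext4_block_truncate_page():
 * the number of tail bytes zeroed in the block containing 'from' when
 * zeroing occurred, 0 otherwise. 'blocksize' must be a power of two. */
static long long tail_zero_len(long long from, unsigned int blocksize,
			       bool did_zero)
{
	/* Bytes from 'from' to the end of its block. */
	long long length = blocksize - (from & (blocksize - 1));

	return did_zero ? length : 0;
}
```

For example, with a 4096-byte block size and from = 5000, the in-block offset is 904, so 3192 tail bytes are zeroed.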
From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
Subject: [PATCH v2 03/10] ext4: rename and extend ext4_block_truncate_page()
Date: Wed, 25 Mar 2026 15:28:42 +0800
Message-ID: <20260325072850.3997161-4-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Rename ext4_block_truncate_page() to ext4_block_zero_eof() and extend
its signature to accept an explicit 'end' offset instead of calculating
the block boundary. This helper can now replace all cases that require
zeroing the partial EOF block, including the append buffered write
paths in ext4_*_write_end().

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/ext4.h    |  2 ++
 fs/ext4/extents.c |  4 ++--
 fs/ext4/inode.c   | 43 +++++++++++++++++++++++--------------------
 3 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 293f698b7042..c62459ef9796 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3099,6 +3099,8 @@ extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks);
 extern int ext4_chunk_trans_extent(struct inode *inode, int nrblocks);
 extern int ext4_meta_trans_blocks(struct inode *inode, int lblocks,
 				  int pextents);
+extern int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
+			       loff_t from, loff_t end);
 extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 				    loff_t lstart, loff_t lend);
 extern vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index ae3804f36535..a265070c1b79 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4625,8 +4625,8 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
 				inode_get_ctime(inode));
 		if (epos > old_size) {
 			pagecache_isize_extended(inode, old_size, epos);
-			ext4_zero_partial_blocks(handle, inode,
-						 old_size, epos - old_size);
+			ext4_block_zero_eof(handle, inode, old_size,
+					    epos);
 		}
 	}
 	ret2 = ext4_mark_inode_dirty(handle, inode);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index f21be26b4642..a7635bbac1a0 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1458,7 +1458,7 @@ static int ext4_write_end(const struct kiocb *iocb,
 
 	if (old_size < pos && !verity) {
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_zero_partial_blocks(handle, inode, old_size, pos - old_size);
+		ext4_block_zero_eof(handle, inode, old_size, pos);
 	}
 	/*
 	 * Don't mark the inode dirty under folio lock. First, it unnecessarily
@@ -1576,7 +1576,7 @@ static int ext4_journalled_write_end(const struct kiocb *iocb,
 
 	if (old_size < pos && !verity) {
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_zero_partial_blocks(handle, inode, old_size, pos - old_size);
+		ext4_block_zero_eof(handle, inode, old_size, pos);
 	}
 
 	if (size_changed) {
@@ -3252,7 +3252,7 @@ static int ext4_da_do_write_end(struct address_space *mapping,
 	if (IS_ERR(handle))
 		return PTR_ERR(handle);
 	if (zero_len)
-		ext4_zero_partial_blocks(handle, inode, old_size, zero_len);
+		ext4_block_zero_eof(handle, inode, old_size, pos);
 	ext4_mark_inode_dirty(handle, inode);
 	ext4_journal_stop(handle);
 
@@ -4132,29 +4132,32 @@ static int ext4_block_zero_page_range(handle_t *handle,
 }
 
 /*
- * ext4_block_truncate_page() zeroes out a mapping from file offset `from'
- * up to the end of the block which corresponds to `from'.
- * This required during truncate. We need to physically zero the tail end
- * of that block so it doesn't yield old data if the file is later grown.
- * Return the zeroed length on success.
+ * Zero out a mapping from file offset 'from' up to the end of the block
+ * which corresponds to 'from' or to the given 'end' inside this block.
+ * This required during truncate up and performing append writes. We need
+ * to physically zero the tail end of that block so it doesn't yield old
+ * data if the file is grown. Return the zeroed length on success.
  */
-static int ext4_block_truncate_page(handle_t *handle,
-		struct address_space *mapping, loff_t from)
+int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
+		loff_t from, loff_t end)
 {
-	unsigned length;
-	unsigned blocksize;
-	struct inode *inode = mapping->host;
+	unsigned int blocksize = i_blocksize(inode);
+	unsigned int offset;
+	loff_t length = end - from;
 	bool did_zero = false;
 	int err;
 
+	offset = from & (blocksize - 1);
+	if (!offset || from >= end)
+		return 0;
 	/* If we are processing an encrypted inode during orphan list handling */
 	if (IS_ENCRYPTED(inode) && !fscrypt_has_encryption_key(inode))
 		return 0;
 
-	blocksize = i_blocksize(inode);
-	length = blocksize - (from & (blocksize - 1));
+	if (length > blocksize - offset)
+		length = blocksize - offset;
 
-	err = ext4_block_zero_page_range(handle, mapping, from, length,
+	err = ext4_block_zero_page_range(handle, inode->i_mapping, from, length,
 					 &did_zero);
 	if (err)
 		return err;
@@ -4508,7 +4511,6 @@ int ext4_truncate(struct inode *inode)
 	unsigned int credits;
 	int err = 0, err2;
 	handle_t *handle;
-	struct address_space *mapping = inode->i_mapping;
 
 	/*
 	 * There is a possibility that we're either freeing the inode
@@ -4551,8 +4553,9 @@ int ext4_truncate(struct inode *inode)
 		goto out_trace;
 	}
 
+	/* Zero to the end of the block containing i_size */
 	if (inode->i_size & (inode->i_sb->s_blocksize - 1))
-		ext4_block_truncate_page(handle, mapping, inode->i_size);
+		ext4_block_zero_eof(handle, inode, inode->i_size, LLONG_MAX);
 
 	/*
 	 * We add the inode to the orphan list, so that if this
@@ -5929,8 +5932,8 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
 			inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 			if (oldsize & (inode->i_sb->s_blocksize - 1))
-				ext4_block_truncate_page(handle,
-						inode->i_mapping, oldsize);
+				ext4_block_zero_eof(handle, inode,
+						    oldsize, LLONG_MAX);
 		}
 
 		if (shrink)
-- 
2.52.0
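The range clamping that patch 03 adds to ext4_block_zero_eof() can be checked in isolation. A minimal user-space model (eof_zero_len() is a hypothetical helper name; it omits the encrypted-inode early return, which doesn't affect the arithmetic):

```c
#include <assert.h>
#include <limits.h>

/* Models the range computation in ext4_block_zero_eof() after patch 03:
 * zero from 'from' up to 'end', but never past the end of the block
 * containing 'from'; block-aligned or empty ranges need no zeroing. */
static long long eof_zero_len(long long from, long long end,
			      unsigned int blocksize)
{
	unsigned int offset = from & (blocksize - 1);
	long long length = end - from;

	/* 'from' on a block boundary, or an empty range: nothing to do. */
	if (!offset || from >= end)
		return 0;
	/* Clamp to the end of the block containing 'from'. */
	if (length > blocksize - offset)
		length = blocksize - offset;
	return length;
}
```

Passing end = LLONG_MAX reproduces the truncate callers above (zero to the end of the block), while an append write passes the new EOF so only the gap between the old and new size is zeroed.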
From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
Subject: [PATCH v2 04/10] ext4: factor out journalled block zeroing range
Date: Wed, 25 Mar 2026 15:28:43 +0800
Message-ID: <20260325072850.3997161-5-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Refactor __ext4_block_zero_page_range() by separating the block zeroing
operations for ordered data mode and journal data mode into two
distinct functions:

- ext4_block_do_zero_range(): handles non-journal data mode with
  ordered data support
- ext4_block_journalled_zero_range(): handles journal data mode

Also extract a common helper, ext4_load_tail_bh(), to handle buffer
head and folio retrieval, along with the associated error handling.
This prepares for converting the partial block zero range to the iomap
infrastructure.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/inode.c | 98 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 69 insertions(+), 29 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index a7635bbac1a0..3ccba708895d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4002,13 +4002,11 @@ void ext4_set_aops(struct inode *inode)
  * ext4_punch_hole, etc) which needs to be properly zeroed out. Otherwise a
  * racing writeback can come later and flush the stale pagecache to disk.
  */
-static int __ext4_block_zero_page_range(handle_t *handle,
-		struct address_space *mapping, loff_t from, loff_t length,
-		bool *did_zero)
+static struct buffer_head *ext4_load_tail_bh(struct inode *inode, loff_t from)
 {
 	unsigned int offset, blocksize, pos;
 	ext4_lblk_t iblock;
-	struct inode *inode = mapping->host;
+	struct address_space *mapping = inode->i_mapping;
 	struct buffer_head *bh;
 	struct folio *folio;
 	int err = 0;
@@ -4017,7 +4015,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 			FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
 			mapping_gfp_constraint(mapping, ~__GFP_FS));
 	if (IS_ERR(folio))
-		return PTR_ERR(folio);
+		return ERR_CAST(folio);
 
 	blocksize = inode->i_sb->s_blocksize;
 
@@ -4069,33 +4067,73 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 			}
 		}
 	}
-	if (ext4_should_journal_data(inode)) {
-		BUFFER_TRACE(bh, "get write access");
-		err = ext4_journal_get_write_access(handle, inode->i_sb, bh,
-						    EXT4_JTR_NONE);
-		if (err)
-			goto unlock;
-	}
-	folio_zero_range(folio, offset, length);
+	return bh;
+
+unlock:
+	folio_unlock(folio);
+	folio_put(folio);
+	return err ? ERR_PTR(err) : NULL;
+}
+
+static int ext4_block_do_zero_range(handle_t *handle, struct inode *inode,
+		loff_t from, loff_t length, bool *did_zero)
+{
+	struct buffer_head *bh;
+	struct folio *folio;
+	int err = 0;
+
+	bh = ext4_load_tail_bh(inode, from);
+	if (IS_ERR_OR_NULL(bh))
+		return PTR_ERR_OR_ZERO(bh);
+
+	folio = bh->b_folio;
+	folio_zero_range(folio, offset_in_folio(folio, from), length);
 	BUFFER_TRACE(bh, "zeroed end of block");
 
-	if (ext4_should_journal_data(inode)) {
-		err = ext4_dirty_journalled_data(handle, bh);
-	} else {
-		mark_buffer_dirty(bh);
-		/*
-		 * Only the written block requires ordered data to prevent
-		 * exposing stale data.
-		 */
-		if (!buffer_unwritten(bh) && !buffer_delay(bh) &&
-		    ext4_should_order_data(inode))
-			err = ext4_jbd2_inode_add_write(handle, inode, from,
-							length);
-	}
+	mark_buffer_dirty(bh);
+	/*
+	 * Only the written block requires ordered data to prevent exposing
+	 * stale data.
+	 */
+	if (ext4_should_order_data(inode) &&
+	    !buffer_unwritten(bh) && !buffer_delay(bh))
+		err = ext4_jbd2_inode_add_write(handle, inode, from, length);
 	if (!err && did_zero)
 		*did_zero = true;
 
-unlock:
+	folio_unlock(folio);
+	folio_put(folio);
+	return err;
+}
+
+static int ext4_block_journalled_zero_range(handle_t *handle,
+		struct inode *inode, loff_t from, loff_t length, bool *did_zero)
+{
+	struct buffer_head *bh;
+	struct folio *folio;
+	int err;
+
+	bh = ext4_load_tail_bh(inode, from);
+	if (IS_ERR_OR_NULL(bh))
+		return PTR_ERR_OR_ZERO(bh);
+	folio = bh->b_folio;
+
+	BUFFER_TRACE(bh, "get write access");
+	err = ext4_journal_get_write_access(handle, inode->i_sb, bh,
+					    EXT4_JTR_NONE);
+	if (err)
+		goto out;
+
+	folio_zero_range(folio, offset_in_folio(folio, from), length);
+	BUFFER_TRACE(bh, "zeroed end of block");
+
+	err = ext4_dirty_journalled_data(handle, bh);
+	if (err)
+		goto out;
+
+	if (did_zero)
+		*did_zero = true;
+out:
 	folio_unlock(folio);
 	folio_put(folio);
 	return err;
@@ -4126,9 +4164,11 @@ static int ext4_block_zero_page_range(handle_t *handle,
 	if (IS_DAX(inode)) {
 		return dax_zero_range(inode, from, length, did_zero,
 				      &ext4_iomap_ops);
+	} else if (ext4_should_journal_data(inode)) {
+		return ext4_block_journalled_zero_range(handle, inode, from,
+							length, did_zero);
 	}
-	return __ext4_block_zero_page_range(handle, mapping, from, length,
-			did_zero);
+	return ext4_block_do_zero_range(handle, inode, from, length, did_zero);
}
 
 /*
-- 
2.52.0
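After patch 04, ext4_block_zero_page_range() dispatches on inode state rather than branching inside one large function. A minimal user-space sketch of that dispatch order (pick_zero_path() and the two bool flags are hypothetical stand-ins for IS_DAX() and ext4_should_journal_data(); the enum names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Models the dispatch order patch 04 leaves in ext4_block_zero_page_range():
 * DAX inodes go through dax_zero_range(), data=journal inodes through
 * ext4_block_journalled_zero_range(), everything else through
 * ext4_block_do_zero_range(). */
enum zero_path { ZERO_DAX, ZERO_JOURNALLED, ZERO_ORDERED };

static enum zero_path pick_zero_path(bool is_dax, bool journal_data)
{
	if (is_dax)
		return ZERO_DAX;
	else if (journal_data)
		return ZERO_JOURNALLED;
	return ZERO_ORDERED;
}
```

Note that DAX takes precedence: a DAX inode never reaches the journalled path, which matches the order of the if/else chain in the patch.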
requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E5146242D60; Wed, 25 Mar 2026 07:33:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774424024; cv=none; b=D+JqUwQpDyPwC2PRQB0QWBDSR6GHbbouQr9AxMOKlNxuaodFR6HvqB+MYmodJMNE7zUbrMBtTFZkPFOwf2OUGnoJgQtPmBhlKnggZw2L1zrewsyr1EHXb135GAjKRCOIGyJVW02jt05sDSkoQyNRtRekn8J22EbnW5b1v2q3Pt0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774424024; c=relaxed/simple; bh=ZBEv5WaFYcj0GBow/xdnVdIAF/hi6rWaAhCuhDiQEvE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VQSqmi5O3bS+537HE5I8ocIzZEjSNClj6bzMr9TY8zyyu3f+00Pb9gvcEMgGfDK05wWRho1++FKVFkJLpkm/LpN1tmdRiU2AC7WaRfnMMZdLtxBpOWfAMhUvqmdZX++YL3JXAhc/OLq/87pEflZ9VHXmj0OMZNbKJM+1Yd//NCA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com; spf=pass smtp.mailfrom=huaweicloud.com; arc=none smtp.client-ip=45.249.212.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huaweicloud.com Received: from mail.maildlp.com (unknown [172.19.163.198]) by dggsgout11.his.huawei.com (SkyGuard) with ESMTPS id 4fgdtd4hvxzYQtxw; Wed, 25 Mar 2026 15:33:29 +0800 (CST) Received: from mail02.huawei.com (unknown [10.116.40.112]) by mail.maildlp.com (Postfix) with ESMTP id 9E40440539; Wed, 25 Mar 2026 15:33:39 +0800 (CST) Received: from huaweicloud.com (unknown [10.50.85.155]) by APP1 (Coremail) with SMTP id cCh0CgAHC9vFj8NpuR6cCA--.49898S9; Wed, 25 Mar 2026 15:33:39 +0800 (CST) From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ojaswin@linux.ibm.com, ritesh.list@gmail.com, libaokun@linux.alibaba.com, 
yi.zhang@huawei.com, yi.zhang@huaweicloud.com, yizhang089@gmail.com, yangerkun@huawei.com, yukuai@fnnas.com Subject: [PATCH v2 05/10] ext4: rename ext4_block_zero_page_range() to ext4_block_zero_range() Date: Wed, 25 Mar 2026 15:28:44 +0800 Message-ID: <20260325072850.3997161-6-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.52.0 In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com> References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-CM-TRANSID: cCh0CgAHC9vFj8NpuR6cCA--.49898S9 X-Coremail-Antispam: 1UD129KBjvJXoWxAw1fAr4kZFyfZr4kGw15Jwb_yoW5AFy8pr y3tw15ur47Wryq9F1xWF12qr1Ik3Z3GFW8Wry3GryFv3yxXas3tF98K3Z5XF4jg3yxXa40 qF4Yyry2gw17AaDanT9S1TB71UUUUU7qnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2 9KBjDU0xBIdaVrnRJUUUmI14x267AKxVWrJVCq3wAFc2x0x2IEx4CE42xK8VAvwI8IcIk0 rVWrJVCq3wAFIxvE14AKwVWUJVWUGwA2048vs2IY020E87I2jVAFwI0_JF0E3s1l82xGYI kIc2x26xkF7I0E14v26ryj6s0DM28lY4IEw2IIxxk0rwA2F7IY1VAKz4vEj48ve4kI8wA2 z4x0Y4vE2Ix0cI8IcVAFwI0_tr0E3s1l84ACjcxK6xIIjxv20xvEc7CjxVAFwI0_Gr1j6F 4UJwA2z4x0Y4vEx4A2jsIE14v26rxl6s0DM28EF7xvwVC2z280aVCY1x0267AKxVW0oVCq 3wAS0I0E0xvYzxvE52x082IY62kv0487Mc02F40EFcxC0VAKzVAqx4xG6I80ewAv7VC0I7 IYx2IY67AKxVWUGVWUXwAv7VC2z280aVAFwI0_Jr0_Gr1lOx8S6xCaFVCjc4AY6r1j6r4U M4x0Y48IcxkI7VAKI48JM4x0x7Aq67IIx4CEVc8vx2IErcIFxwACI402YVCY1x02628vn2 kIc2xKxwCY1x0262kKe7AKxVWUtVW8ZwCF04k20xvY0x0EwIxGrwCFx2IqxVCFs4IE7xkE bVWUJVW8JwC20s026c02F40E14v26r1j6r18MI8I3I0E7480Y4vE14v26r106r1rMI8E67 AF67kF1VAFwI0_Jw0_GFylIxkGc2Ij64vIr41lIxAIcVC0I7IYx2IY67AKxVWUCVW8JwCI 42IY6xIIjxv20xvEc7CjxVAFwI0_Cr0_Gr1UMIIF0xvE42xK8VAvwI8IcIk0rVWUJVWUCw CI42IY6I8E87Iv67AKxVWUJVW8JwCI42IY6I8E87Iv6xkF7I0E14v26r4j6r4UJbIYCTnI WIevJa73UjIFyTuYvjfUriihUUUUU X-CM-SenderInfo: d1lo6xhdqjqx5xdzvxpfor3voofrz/ Content-Type: text/plain; charset="utf-8" From: Zhang Yi Rename ext4_block_zero_page_range() to 
ext4_block_zero_range() since the "page" naming is no longer appropriate
in the current context. Also change its signature to take an inode
pointer instead of an address_space. This aligns with the callers
ext4_block_zero_eof() and ext4_zero_partial_blocks().

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/inode.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 3ccba708895d..3c3c07fd00ba 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4146,11 +4146,9 @@ static int ext4_block_journalled_zero_range(handle_t *handle,
  * the end of the block it will be shortened to end of the block
  * that corresponds to 'from'
  */
-static int ext4_block_zero_page_range(handle_t *handle,
-		struct address_space *mapping, loff_t from, loff_t length,
-		bool *did_zero)
+static int ext4_block_zero_range(handle_t *handle, struct inode *inode,
+		loff_t from, loff_t length, bool *did_zero)
 {
-	struct inode *inode = mapping->host;
 	unsigned blocksize = inode->i_sb->s_blocksize;
 	unsigned int max = blocksize - (from & (blocksize - 1));
 
@@ -4197,8 +4195,7 @@ int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
 	if (length > blocksize - offset)
 		length = blocksize - offset;
 
-	err = ext4_block_zero_page_range(handle, inode->i_mapping, from, length,
-					 &did_zero);
+	err = ext4_block_zero_range(handle, inode, from, length, &did_zero);
 	if (err)
 		return err;
 
@@ -4209,7 +4206,6 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 		loff_t lstart, loff_t length)
 {
 	struct super_block *sb = inode->i_sb;
-	struct address_space *mapping = inode->i_mapping;
 	unsigned partial_start, partial_end;
 	ext4_fsblk_t start, end;
 	loff_t byte_end = (lstart + length - 1);
@@ -4224,22 +4220,22 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 	/* Handle partial zero within the single block */
 	if (start == end &&
 	    (partial_start || (partial_end !=
sb->s_blocksize - 1))) {
-		err = ext4_block_zero_page_range(handle, mapping,
-						 lstart, length, NULL);
+		err = ext4_block_zero_range(handle, inode, lstart,
+					    length, NULL);
 		return err;
 	}
 	/* Handle partial zero out on the start of the range */
 	if (partial_start) {
-		err = ext4_block_zero_page_range(handle, mapping, lstart,
-						 sb->s_blocksize, NULL);
+		err = ext4_block_zero_range(handle, inode, lstart,
+					    sb->s_blocksize, NULL);
 		if (err)
 			return err;
 	}
 	/* Handle partial zero out on the end of the range */
 	if (partial_end != sb->s_blocksize - 1)
-		err = ext4_block_zero_page_range(handle, mapping,
-						 byte_end - partial_end,
-						 partial_end + 1, NULL);
+		err = ext4_block_zero_range(handle, inode,
+					    byte_end - partial_end,
+					    partial_end + 1, NULL);
 	return err;
 }
 
-- 
2.52.0

From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
    ojaswin@linux.ibm.com, ritesh.list@gmail.com, libaokun@linux.alibaba.com,
    yi.zhang@huawei.com, yi.zhang@huaweicloud.com, yizhang089@gmail.com,
    yangerkun@huawei.com, yukuai@fnnas.com
Subject: [PATCH v2 06/10] ext4: move ordered data handling out of ext4_block_do_zero_range()
Date: Wed, 25 Mar 2026 15:28:45 +0800
Message-ID: <20260325072850.3997161-7-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Remove the handle parameter from ext4_block_do_zero_range() and move
the ordered data handling to ext4_block_zero_eof(). This is necessary
for truncate up and append writes across a range extending beyond EOF.
The ordered data must be committed before updating i_disksize to
prevent exposing stale on-disk data from concurrent post-EOF mmap
writes during previous folio writeback, or in case of a system crash
during append writes.

This is unnecessary for partial block hole punching because the entire
punch operation does not provide atomicity guarantees and can already
expose intermediate results in case of a crash. Hole punching can only
ever expose data that was there before the punch, but missed zeroing
during an append or truncate could expose data that was not visible in
the file before the operation.

Since ordered data handling is no longer performed inside
ext4_zero_partial_blocks(), ext4_punch_hole() no longer needs to attach
the jinode.
This prepares for the conversion to the iomap infrastructure, which
does not use the ordered data mode while zeroing post-EOF partial
blocks.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/inode.c | 58 ++++++++++++++++++++++++------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 3c3c07fd00ba..84dd3140793d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4075,12 +4075,12 @@ static struct buffer_head *ext4_load_tail_bh(struct inode *inode, loff_t from)
 	return err ? ERR_PTR(err) : NULL;
 }
 
-static int ext4_block_do_zero_range(handle_t *handle, struct inode *inode,
-		loff_t from, loff_t length, bool *did_zero)
+static int ext4_block_do_zero_range(struct inode *inode, loff_t from,
+		loff_t length, bool *did_zero,
+		bool *zero_written)
 {
 	struct buffer_head *bh;
 	struct folio *folio;
-	int err = 0;
 
 	bh = ext4_load_tail_bh(inode, from);
 	if (IS_ERR_OR_NULL(bh))
@@ -4091,19 +4091,14 @@ static int ext4_block_do_zero_range(handle_t *handle, struct inode *inode,
 	BUFFER_TRACE(bh, "zeroed end of block");
 
 	mark_buffer_dirty(bh);
-	/*
-	 * Only the written block requires ordered data to prevent exposing
-	 * stale data.
-	 */
-	if (ext4_should_order_data(inode) &&
-	    !buffer_unwritten(bh) && !buffer_delay(bh))
-		err = ext4_jbd2_inode_add_write(handle, inode, from, length);
-	if (!err && did_zero)
+	if (did_zero)
 		*did_zero = true;
+	if (zero_written && !buffer_unwritten(bh) && !buffer_delay(bh))
+		*zero_written = true;
 
 	folio_unlock(folio);
 	folio_put(folio);
-	return err;
+	return 0;
 }
 
 static int ext4_block_journalled_zero_range(handle_t *handle,
@@ -4147,7 +4142,8 @@ static int ext4_block_journalled_zero_range(handle_t *handle,
  * that corresponds to 'from'
  */
 static int ext4_block_zero_range(handle_t *handle, struct inode *inode,
-		loff_t from, loff_t length, bool *did_zero)
+		loff_t from, loff_t length, bool *did_zero,
+		bool *zero_written)
 {
 	unsigned blocksize = inode->i_sb->s_blocksize;
 	unsigned int max = blocksize - (from & (blocksize - 1));
@@ -4166,7 +4162,8 @@ static int ext4_block_zero_range(handle_t *handle, struct inode *inode,
 		return ext4_block_journalled_zero_range(handle, inode, from,
 							length, did_zero);
 	}
-	return ext4_block_do_zero_range(handle, inode, from, length, did_zero);
+	return ext4_block_do_zero_range(inode, from, length, did_zero,
+					zero_written);
 }
 
 /*
@@ -4183,6 +4180,7 @@ int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
 	unsigned int offset;
 	loff_t length = end - from;
 	bool did_zero = false;
+	bool zero_written = false;
 	int err;
 
 	offset = from & (blocksize - 1);
@@ -4195,9 +4193,22 @@ int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
 	if (length > blocksize - offset)
 		length = blocksize - offset;
 
-	err = ext4_block_zero_range(handle, inode, from, length, &did_zero);
+	err = ext4_block_zero_range(handle, inode, from, length,
+				    &did_zero, &zero_written);
 	if (err)
 		return err;
+	/*
+	 * It's necessary to order zeroed data before updating i_disksize
+	 * when truncating up or performing an append write, because stale
+	 * on-disk data might otherwise be exposed by a concurrent post-EOF
+	 * mmap write during folio writeback.
+	 */
+	if (ext4_should_order_data(inode) &&
+	    did_zero && zero_written && !IS_DAX(inode)) {
+		err = ext4_jbd2_inode_add_write(handle, inode, from, length);
+		if (err)
+			return err;
+	}
 
 	return did_zero ? length : 0;
 }
@@ -4221,13 +4232,13 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 	if (start == end &&
 	    (partial_start || (partial_end != sb->s_blocksize - 1))) {
 		err = ext4_block_zero_range(handle, inode, lstart,
-					    length, NULL);
+					    length, NULL, NULL);
 		return err;
 	}
 	/* Handle partial zero out on the start of the range */
 	if (partial_start) {
 		err = ext4_block_zero_range(handle, inode, lstart,
-					    sb->s_blocksize, NULL);
+					    sb->s_blocksize, NULL, NULL);
 		if (err)
 			return err;
 	}
@@ -4235,7 +4246,7 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 	if (partial_end != sb->s_blocksize - 1)
 		err = ext4_block_zero_range(handle, inode,
 					    byte_end - partial_end,
-					    partial_end + 1, NULL);
+					    partial_end + 1, NULL, NULL);
 	return err;
 }
 
@@ -4410,17 +4421,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 		end = max_end;
 	length = end - offset;
 
-	/*
-	 * Attach jinode to inode for jbd2 if we do any zeroing of partial
-	 * block.
-	 */
-	if (!IS_ALIGNED(offset | end, sb->s_blocksize)) {
-		ret = ext4_inode_attach_jinode(inode);
-		if (ret < 0)
-			return ret;
-	}
-
-	ret = ext4_update_disksize_before_punch(inode, offset, length);
 	if (ret)
 		return ret;
-- 
2.52.0

From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
    ojaswin@linux.ibm.com, ritesh.list@gmail.com, libaokun@linux.alibaba.com,
    yi.zhang@huawei.com, yi.zhang@huaweicloud.com, yizhang089@gmail.com,
    yangerkun@huawei.com, yukuai@fnnas.com
Subject: [PATCH v2 07/10] ext4: remove handle parameters from zero partial block functions
Date: Wed, 25 Mar 2026 15:28:46 +0800
Message-ID: <20260325072850.3997161-8-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Only journal data mode requires an active journal handle when zeroing
partial blocks. Stop passing handle_t *handle to
ext4_zero_partial_blocks() and related functions, and make
ext4_block_journalled_zero_range() start a handle independently.

This change has no practical impact now because all callers invoke
these functions within the context of an active handle. It prepares for
moving ext4_block_zero_eof() out of an active handle in the next patch,
which is a prerequisite for converting the block zero range operations
to the iomap infrastructure.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/ext4.h    |  7 +++---
 fs/ext4/extents.c |  5 ++--
 fs/ext4/inode.c   | 62 ++++++++++++++++++++++++++++-------------------
 3 files changed, 42 insertions(+), 32 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index c62459ef9796..20545a9523e9 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3099,10 +3099,9 @@ extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks);
 extern int ext4_chunk_trans_extent(struct inode *inode, int nrblocks);
 extern int ext4_meta_trans_blocks(struct inode *inode, int lblocks,
 				  int pextents);
-extern int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
-			       loff_t from, loff_t end);
-extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
-				    loff_t lstart, loff_t lend);
+extern int ext4_block_zero_eof(struct inode *inode, loff_t from, loff_t end);
+extern int ext4_zero_partial_blocks(struct inode *inode, loff_t lstart,
+				    loff_t lend);
 extern vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf);
 extern qsize_t *ext4_get_reserved_space(struct inode *inode);
 extern int ext4_get_projid(struct inode *inode, kprojid_t *projid);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index a265070c1b79..753a0f3418a4 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4625,8 +4625,7 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
 					inode_get_ctime(inode));
 		if (epos > old_size) {
 			pagecache_isize_extended(inode, old_size, epos);
-			ext4_block_zero_eof(handle, inode, old_size,
-					    epos);
+			ext4_block_zero_eof(inode, old_size, epos);
 		}
 	}
 	ret2 = ext4_mark_inode_dirty(handle, inode);
@@ -4744,7 +4743,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	}
 
 	/* Zero out partial block at the edges of the range */
-	ret = ext4_zero_partial_blocks(handle, inode, offset, len);
+	ret = ext4_zero_partial_blocks(inode, offset, len);
 	if (ret)
 		goto out_handle;
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 84dd3140793d..f68b2afdcfcb 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1458,7 +1458,7 @@ static int ext4_write_end(const struct kiocb *iocb,
 
 	if (old_size < pos && !verity) {
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_block_zero_eof(handle, inode, old_size, pos);
+		ext4_block_zero_eof(inode, old_size, pos);
 	}
 	/*
 	 * Don't mark the inode dirty under folio lock. First, it unnecessarily
@@ -1576,7 +1576,7 @@ static int ext4_journalled_write_end(const struct kiocb *iocb,
 
 	if (old_size < pos && !verity) {
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_block_zero_eof(handle, inode, old_size, pos);
+		ext4_block_zero_eof(inode, old_size, pos);
 	}
 
 	if (size_changed) {
@@ -3252,7 +3252,7 @@ static int ext4_da_do_write_end(struct address_space *mapping,
 	if (IS_ERR(handle))
 		return PTR_ERR(handle);
 	if (zero_len)
-		ext4_block_zero_eof(handle, inode, old_size, pos);
+		ext4_block_zero_eof(inode, old_size, pos);
 	ext4_mark_inode_dirty(handle, inode);
 	ext4_journal_stop(handle);
 
@@ -4101,16 +4101,23 @@ static int ext4_block_do_zero_range(struct inode *inode, loff_t from,
 	return 0;
 }
 
-static int ext4_block_journalled_zero_range(handle_t *handle,
-		struct inode *inode, loff_t from, loff_t length, bool *did_zero)
+static int ext4_block_journalled_zero_range(struct inode *inode, loff_t from,
+		loff_t length, bool *did_zero)
 {
 	struct buffer_head *bh;
 	struct folio *folio;
+	handle_t *handle;
 	int err;
 
+	handle = ext4_journal_start(inode, EXT4_HT_MISC, 1);
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+
 	bh = ext4_load_tail_bh(inode, from);
-	if (IS_ERR_OR_NULL(bh))
-		return PTR_ERR_OR_ZERO(bh);
+	if (IS_ERR_OR_NULL(bh)) {
+		err = PTR_ERR_OR_ZERO(bh);
+		goto out_handle;
+	}
 	folio = bh->b_folio;
 
 	BUFFER_TRACE(bh, "get write access");
@@ -4131,6 +4138,8 @@ static int ext4_block_journalled_zero_range(handle_t *handle,
 out:
 	folio_unlock(folio);
 	folio_put(folio);
+out_handle:
+	ext4_journal_stop(handle);
 	return err;
 }
 
@@ -4141,7 +4150,7 @@ static int ext4_block_journalled_zero_range(handle_t *handle,
  * the end of the block it will be shortened to end of the block
  * that corresponds to 'from'
  */
-static int ext4_block_zero_range(handle_t *handle, struct inode *inode,
+static int ext4_block_zero_range(struct inode *inode,
 		loff_t from, loff_t length, bool *did_zero,
 		bool *zero_written)
 {
@@ -4159,8 +4168,8 @@ static int ext4_block_zero_range(handle_t *handle, struct inode *inode,
 		return dax_zero_range(inode, from, length, did_zero,
 				      &ext4_iomap_ops);
 	} else if (ext4_should_journal_data(inode)) {
-		return ext4_block_journalled_zero_range(handle, inode, from,
-							length, did_zero);
+		return ext4_block_journalled_zero_range(inode, from, length,
+							did_zero);
 	}
 	return ext4_block_do_zero_range(inode, from, length, did_zero,
 					zero_written);
@@ -4173,8 +4182,7 @@ static int ext4_block_zero_range(handle_t *handle, struct inode *inode,
 * to physically zero the tail end of that block so it doesn't yield old
 * data if the file is grown. Return the zeroed length on success.
 */
-int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
-			loff_t from, loff_t end)
+int ext4_block_zero_eof(struct inode *inode, loff_t from, loff_t end)
 {
 	unsigned int blocksize = i_blocksize(inode);
 	unsigned int offset;
@@ -4193,7 +4201,7 @@ int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
 	if (length > blocksize - offset)
 		length = blocksize - offset;
 
-	err = ext4_block_zero_range(handle, inode, from, length,
+	err = ext4_block_zero_range(inode, from, length,
 				    &did_zero, &zero_written);
 	if (err)
 		return err;
@@ -4205,7 +4213,14 @@ int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
 	 */
 	if (ext4_should_order_data(inode) &&
 	    did_zero && zero_written && !IS_DAX(inode)) {
+		handle_t *handle;
+
+		handle = ext4_journal_start(inode, EXT4_HT_MISC, 1);
+		if (IS_ERR(handle))
+			return PTR_ERR(handle);
+
 		err = ext4_jbd2_inode_add_write(handle, inode, from, length);
+		ext4_journal_stop(handle);
 		if (err)
 			return err;
 	}
@@ -4213,8 +4228,7 @@ int ext4_block_zero_eof(handle_t *handle, struct inode *inode,
 	return did_zero ? length : 0;
 }
 
-int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
-			     loff_t lstart, loff_t length)
+int ext4_zero_partial_blocks(struct inode *inode, loff_t lstart, loff_t length)
 {
 	struct super_block *sb = inode->i_sb;
 	unsigned partial_start, partial_end;
@@ -4231,21 +4245,19 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 	/* Handle partial zero within the single block */
 	if (start == end &&
 	    (partial_start || (partial_end != sb->s_blocksize - 1))) {
-		err = ext4_block_zero_range(handle, inode, lstart,
-					    length, NULL, NULL);
+		err = ext4_block_zero_range(inode, lstart, length, NULL, NULL);
 		return err;
 	}
 	/* Handle partial zero out on the start of the range */
 	if (partial_start) {
-		err = ext4_block_zero_range(handle, inode, lstart,
-					    sb->s_blocksize, NULL, NULL);
+		err = ext4_block_zero_range(inode, lstart, sb->s_blocksize,
+					    NULL, NULL);
 		if (err)
 			return err;
 	}
 	/* Handle partial zero out on the end of the range */
 	if (partial_end != sb->s_blocksize - 1)
-		err = ext4_block_zero_range(handle, inode,
-					    byte_end - partial_end,
+		err = ext4_block_zero_range(inode, byte_end - partial_end,
 					    partial_end + 1, NULL, NULL);
 	return err;
 }
@@ -4441,7 +4453,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 		return ret;
 	}
 
-	ret = ext4_zero_partial_blocks(handle, inode, offset, length);
+	ret = ext4_zero_partial_blocks(inode, offset, length);
 	if (ret)
 		goto out_handle;
 
@@ -4591,7 +4603,7 @@ int ext4_truncate(struct inode *inode)
 
 	/* Zero to the end of the block containing i_size */
 	if (inode->i_size & (inode->i_sb->s_blocksize - 1))
-		ext4_block_zero_eof(handle, inode, inode->i_size, LLONG_MAX);
+		ext4_block_zero_eof(inode, inode->i_size, LLONG_MAX);
 
 	/*
 	 * We add the inode to the orphan list, so that if this
@@ -5968,8 +5980,8 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
 		inode_set_mtime_to_ts(inode,
 				      inode_set_ctime_current(inode));
 		if (oldsize &
 		    (inode->i_sb->s_blocksize - 1))
-			ext4_block_zero_eof(handle, inode,
-					    oldsize, LLONG_MAX);
+			ext4_block_zero_eof(inode, oldsize,
+					    LLONG_MAX);
 	}
 
 	if (shrink)
-- 
2.52.0

From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
    ojaswin@linux.ibm.com, ritesh.list@gmail.com, libaokun@linux.alibaba.com,
    yi.zhang@huawei.com, yi.zhang@huaweicloud.com, yizhang089@gmail.com,
    yangerkun@huawei.com, yukuai@fnnas.com
Subject: [PATCH v2 08/10] ext4: pass allocate range as loff_t to ext4_alloc_file_blocks()
Date: Wed, 25 Mar 2026 15:28:47 +0800
Message-ID: <20260325072850.3997161-9-yi.zhang@huaweicloud.com>
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Change ext4_alloc_file_blocks() to accept offset and len in byte
granularity instead of block granularity. This allows callers to pass
byte offsets and lengths directly, and it prepares for moving the
ext4_zero_partial_blocks() call out of the while (len) loop for
unaligned append writes, where it only needs to be invoked once before
doing block allocation.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/extents.c | 53 ++++++++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 31 deletions(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 753a0f3418a4..57a686b600d9 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4542,15 +4542,15 @@ int ext4_ext_truncate(handle_t *handle, struct inode *inode)
 	return err;
 }
 
-static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
-				  ext4_lblk_t len, loff_t new_size,
-				  int flags)
+static int ext4_alloc_file_blocks(struct file *file, loff_t offset, loff_t len,
+				  loff_t new_size, int flags)
 {
 	struct inode *inode = file_inode(file);
 	handle_t *handle;
 	int ret = 0, ret2 = 0, ret3 = 0;
 	int retries = 0;
 	int depth = 0;
+	ext4_lblk_t len_lblk;
 	struct ext4_map_blocks map;
 	unsigned int credits;
 	loff_t epos, old_size = i_size_read(inode);
@@ -4558,14 +4558,14 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
 	bool alloc_zero = false;
 
 	BUG_ON(!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS));
-	map.m_lblk = offset;
-	map.m_len = len;
+	map.m_lblk = offset >> blkbits;
+	map.m_len = len_lblk = EXT4_MAX_BLOCKS(len, offset, blkbits);
 	/*
 	 * Don't normalize the request if it can
fit in one extent so * that it doesn't get unnecessarily split into multiple * extents. */ - if (len <=3D EXT_UNWRITTEN_MAX_LEN) + if (len_lblk <=3D EXT_UNWRITTEN_MAX_LEN) flags |=3D EXT4_GET_BLOCKS_NO_NORMALIZE; =20 /* @@ -4582,16 +4582,16 @@ static int ext4_alloc_file_blocks(struct file *file= , ext4_lblk_t offset, /* * credits to insert 1 extent into extent tree */ - credits =3D ext4_chunk_trans_blocks(inode, len); + credits =3D ext4_chunk_trans_blocks(inode, len_lblk); depth =3D ext_depth(inode); =20 retry: - while (len) { + while (len_lblk) { /* * Recalculate credits when extent tree depth changes. */ if (depth !=3D ext_depth(inode)) { - credits =3D ext4_chunk_trans_blocks(inode, len); + credits =3D ext4_chunk_trans_blocks(inode, len_lblk); depth =3D ext_depth(inode); } =20 @@ -4648,7 +4648,7 @@ static int ext4_alloc_file_blocks(struct file *file, = ext4_lblk_t offset, } =20 map.m_lblk +=3D ret; - map.m_len =3D len =3D len - ret; + map.m_len =3D len_lblk =3D len_lblk - ret; } if (ret =3D=3D -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries)) goto retry; @@ -4665,11 +4665,9 @@ static long ext4_zero_range(struct file *file, loff_= t offset, { struct inode *inode =3D file_inode(file); handle_t *handle =3D NULL; - loff_t new_size =3D 0; + loff_t align_start, align_end, new_size =3D 0; loff_t end =3D offset + len; - ext4_lblk_t start_lblk, end_lblk; unsigned int blocksize =3D i_blocksize(inode); - unsigned int blkbits =3D inode->i_blkbits; int ret, flags, credits; =20 trace_ext4_zero_range(inode, offset, len, mode); @@ -4690,11 +4688,8 @@ static long ext4_zero_range(struct file *file, loff_= t offset, flags =3D EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT; /* Preallocate the range including the unaligned edges */ if (!IS_ALIGNED(offset | end, blocksize)) { - ext4_lblk_t alloc_lblk =3D offset >> blkbits; - ext4_lblk_t len_lblk =3D EXT4_MAX_BLOCKS(len, offset, blkbits); - - ret =3D ext4_alloc_file_blocks(file, alloc_lblk, len_lblk, - new_size, flags); + ret =3D 
ext4_alloc_file_blocks(file, offset, len, new_size, + flags); if (ret) return ret; } @@ -4709,18 +4704,17 @@ static long ext4_zero_range(struct file *file, loff= _t offset, return ret; =20 /* Zero range excluding the unaligned edges */ - start_lblk =3D EXT4_B_TO_LBLK(inode, offset); - end_lblk =3D end >> blkbits; - if (end_lblk > start_lblk) { - ext4_lblk_t zero_blks =3D end_lblk - start_lblk; - + align_start =3D round_up(offset, blocksize); + align_end =3D round_down(end, blocksize); + if (align_end > align_start) { if (mode & FALLOC_FL_WRITE_ZEROES) flags =3D EXT4_GET_BLOCKS_CREATE_ZERO | EXT4_EX_NOCACHE; else flags |=3D (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN | EXT4_EX_NOCACHE); - ret =3D ext4_alloc_file_blocks(file, start_lblk, zero_blks, - new_size, flags); + ret =3D ext4_alloc_file_blocks(file, align_start, + align_end - align_start, new_size, + flags); if (ret) return ret; } @@ -4768,15 +4762,11 @@ static long ext4_do_fallocate(struct file *file, lo= ff_t offset, struct inode *inode =3D file_inode(file); loff_t end =3D offset + len; loff_t new_size =3D 0; - ext4_lblk_t start_lblk, len_lblk; int ret; =20 trace_ext4_fallocate_enter(inode, offset, len, mode); WARN_ON_ONCE(!inode_is_locked(inode)); =20 - start_lblk =3D offset >> inode->i_blkbits; - len_lblk =3D EXT4_MAX_BLOCKS(len, offset, inode->i_blkbits); - /* We only support preallocation for extent-based files only. 
 	 */
 	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
 		ret =3D -EOPNOTSUPP;
@@ -4791,7 +4781,7 @@ static long ext4_do_fallocate(struct file *file, loff_t offset,
 		goto out;
 	}
=20
-	ret =3D ext4_alloc_file_blocks(file, start_lblk, len_lblk, new_size,
+	ret =3D ext4_alloc_file_blocks(file, offset, len, new_size,
 				     EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT);
 	if (ret)
 		goto out;
@@ -4801,7 +4791,8 @@ static long ext4_do_fallocate(struct file *file, loff_t offset,
 				EXT4_I(inode)->i_sync_tid);
 	}
 out:
-	trace_ext4_fallocate_exit(inode, offset, len_lblk, ret);
+	trace_ext4_fallocate_exit(inode, offset,
+			EXT4_MAX_BLOCKS(len, offset, inode->i_blkbits), ret);
 	return ret;
 }
=20
--=20
2.52.0

From nobody Fri Apr 3 03:01:32 2026
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
	ojaswin@linux.ibm.com, ritesh.list@gmail.com,
	libaokun@linux.alibaba.com, yi.zhang@huawei.com,
	yi.zhang@huaweicloud.com, yizhang089@gmail.com,
	yangerkun@huawei.com, yukuai@fnnas.com
Subject: [PATCH v2 09/10] ext4: move zero partial block range functions out of active handle
Date: Wed, 25 Mar 2026 15:28:48 +0800
Message-ID: <20260325072850.3997161-10-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Zhang Yi

Move the ext4_block_zero_eof() and ext4_zero_partial_blocks() calls out
of the active handle context, making them independent operations. This
is safe: in data=3Dordered and data=3Djournal modes, data is still updated
before metadata, because we zero and order the data before modifying the
metadata.

This change is required for the iomap infrastructure conversion, because
the iomap buffered I/O path does not use the same journal infrastructure
for partial block zeroing. Its lock ordering is "folio lock ->
transaction start", the opposite of the current path, so zeroing partial
blocks cannot be performed under the active handle.
Signed-off-by: Zhang Yi Reviewed-by: Jan Kara --- fs/ext4/extents.c | 29 ++++++++++++----------------- fs/ext4/inode.c | 36 ++++++++++++++++++------------------ 2 files changed, 30 insertions(+), 35 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 57a686b600d9..81b9d5b4ad71 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4585,6 +4585,10 @@ static int ext4_alloc_file_blocks(struct file *file,= loff_t offset, loff_t len, credits =3D ext4_chunk_trans_blocks(inode, len_lblk); depth =3D ext_depth(inode); =20 + /* Zero to the end of the block containing i_size */ + if (new_size && offset > old_size) + ext4_block_zero_eof(inode, old_size, LLONG_MAX); + retry: while (len_lblk) { /* @@ -4623,10 +4627,8 @@ static int ext4_alloc_file_blocks(struct file *file,= loff_t offset, loff_t len, if (ext4_update_inode_size(inode, epos) & 0x1) inode_set_mtime_to_ts(inode, inode_get_ctime(inode)); - if (epos > old_size) { + if (epos > old_size) pagecache_isize_extended(inode, old_size, epos); - ext4_block_zero_eof(inode, old_size, epos); - } } ret2 =3D ext4_mark_inode_dirty(handle, inode); ext4_update_inode_fsync_trans(handle, inode, 1); @@ -4668,7 +4670,7 @@ static long ext4_zero_range(struct file *file, loff_t= offset, loff_t align_start, align_end, new_size =3D 0; loff_t end =3D offset + len; unsigned int blocksize =3D i_blocksize(inode); - int ret, flags, credits; + int ret, flags; =20 trace_ext4_zero_range(inode, offset, len, mode); WARN_ON_ONCE(!inode_is_locked(inode)); @@ -4722,25 +4724,18 @@ static long ext4_zero_range(struct file *file, loff= _t offset, if (IS_ALIGNED(offset | end, blocksize)) return ret; =20 - /* - * In worst case we have to writeout two nonadjacent unwritten - * blocks and update the inode - */ - credits =3D (2 * ext4_ext_index_trans_blocks(inode, 2)) + 1; - if (ext4_should_journal_data(inode)) - credits +=3D 2; - handle =3D ext4_journal_start(inode, EXT4_HT_MISC, credits); + /* Zero out partial block at the edges of the range 
*/ + ret =3D ext4_zero_partial_blocks(inode, offset, len); + if (ret) + return ret; + + handle =3D ext4_journal_start(inode, EXT4_HT_MISC, 1); if (IS_ERR(handle)) { ret =3D PTR_ERR(handle); ext4_std_error(inode->i_sb, ret); return ret; } =20 - /* Zero out partial block at the edges of the range */ - ret =3D ext4_zero_partial_blocks(inode, offset, len); - if (ret) - goto out_handle; - if (new_size) ext4_update_inode_size(inode, new_size); ret =3D ext4_mark_inode_dirty(handle, inode); diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index f68b2afdcfcb..530197a53208 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4442,8 +4442,12 @@ int ext4_punch_hole(struct file *file, loff_t offset= , loff_t length) if (ret) return ret; =20 + ret =3D ext4_zero_partial_blocks(inode, offset, length); + if (ret) + return ret; + if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) - credits =3D ext4_chunk_trans_extent(inode, 2); + credits =3D ext4_chunk_trans_extent(inode, 0); else credits =3D ext4_blocks_for_truncate(inode); handle =3D ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits); @@ -4453,10 +4457,6 @@ int ext4_punch_hole(struct file *file, loff_t offset= , loff_t length) return ret; } =20 - ret =3D ext4_zero_partial_blocks(inode, offset, length); - if (ret) - goto out_handle; - /* If there are blocks to remove, do it */ start_lblk =3D EXT4_B_TO_LBLK(inode, offset); end_lblk =3D end >> inode->i_blkbits; @@ -4588,6 +4588,9 @@ int ext4_truncate(struct inode *inode) err =3D ext4_inode_attach_jinode(inode); if (err) goto out_trace; + + /* Zero to the end of the block containing i_size */ + ext4_block_zero_eof(inode, inode->i_size, LLONG_MAX); } =20 if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) @@ -4601,10 +4604,6 @@ int ext4_truncate(struct inode *inode) goto out_trace; } =20 - /* Zero to the end of the block containing i_size */ - if (inode->i_size & (inode->i_sb->s_blocksize - 1)) - ext4_block_zero_eof(inode, inode->i_size, LLONG_MAX); - /* * We add the inode to the 
orphan list, so that if this * truncate spans multiple transactions, and we crash, we will @@ -5962,15 +5961,6 @@ int ext4_setattr(struct mnt_idmap *idmap, struct den= try *dentry, goto out_mmap_sem; } =20 - handle =3D ext4_journal_start(inode, EXT4_HT_INODE, 3); - if (IS_ERR(handle)) { - error =3D PTR_ERR(handle); - goto out_mmap_sem; - } - if (ext4_handle_valid(handle) && shrink) { - error =3D ext4_orphan_add(handle, inode); - orphan =3D 1; - } /* * Update c/mtime and tail zero the EOF folio on * truncate up. ext4_truncate() handles the shrink case @@ -5984,6 +5974,16 @@ int ext4_setattr(struct mnt_idmap *idmap, struct den= try *dentry, LLONG_MAX); } =20 + handle =3D ext4_journal_start(inode, EXT4_HT_INODE, 3); + if (IS_ERR(handle)) { + error =3D PTR_ERR(handle); + goto out_mmap_sem; + } + if (ext4_handle_valid(handle) && shrink) { + error =3D ext4_orphan_add(handle, inode); + orphan =3D 1; + } + if (shrink) ext4_fc_track_range(handle, inode, (attr->ia_size > 0 ? attr->ia_size - 1 : 0) >> --=20 2.52.0 From nobody Fri Apr 3 03:01:32 2026 Received: from dggsgout12.his.huawei.com (dggsgout12.his.huawei.com [45.249.212.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E4E92374E6C; Wed, 25 Mar 2026 07:33:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774424034; cv=none; b=P2eYOMB33uXI3z9Mto7PoUDRje4aR9vw+yIG6NbrXwvTzADtY30yImGgH3UPBh5Wu2pjqSHrGfhNHJggG0JgzcRez0xUcinXlz67kLuKw6m4DsW6Y7hZ+cCwk46e1vlVu+JtaXUQSlrOay0/zaFHWh9gWAk5BhHzPKgvJyNv6jM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774424034; c=relaxed/simple; bh=T5rWW1i7HG6tg/wuxYxYjvwLy+YvzaEM/Du4AE7QwB8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
	ojaswin@linux.ibm.com, ritesh.list@gmail.com,
	libaokun@linux.alibaba.com, yi.zhang@huawei.com,
	yi.zhang@huaweicloud.com, yizhang089@gmail.com,
	yangerkun@huawei.com, yukuai@fnnas.com
Subject: [PATCH v2 10/10] ext4: zero post-EOF partial block before appending write
Date: Wed, 25 Mar 2026 15:28:49 +0800
Message-ID: <20260325072850.3997161-11-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
References: <20260325072850.3997161-1-yi.zhang@huaweicloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Zhang Yi

For an appending write beyond EOF, ext4_block_zero_eof() is called
within ext4_*_write_end() to zero out the partial block beyond the old
EOF. This prevents exposing stale data that may have been written
through mmap. However, covering only the regular buffered write path is
insufficient: the DAX path, as well as the upcoming iomap buffered write
path, needs the same treatment. Therefore, move this operation to
ext4_write_checks().

This may introduce a race window in which a post-EOF buffered write
races with an mmap write after the old EOF block has been zeroed. As a
result, the data written to this block by the buffered write and by the
mmap write may be mixed.
However, this is acceptable, because applications cannot rely on the
outcome of such a race anyway.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/file.c  | 14 ++++++++++++++
 fs/ext4/inode.c | 21 +++++++--------------
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index f1dc5ce791a7..b2e44601ab6a 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -271,6 +271,8 @@ static ssize_t ext4_generic_write_checks(struct kiocb *iocb,
=20
 static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from)
 {
+	struct inode *inode =3D file_inode(iocb->ki_filp);
+	loff_t old_size =3D i_size_read(inode);
 	ssize_t ret, count;
=20
 	count =3D ext4_generic_write_checks(iocb, from);
@@ -280,6 +282,18 @@ static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from)
 	ret =3D file_modified(iocb->ki_filp);
 	if (ret)
 		return ret;
+
+	/*
+	 * If the position is beyond EOF, zero out the partial block beyond
+	 * the existing EOF, as it may contain stale data written through
+	 * mmap.
+	 */
+	if (iocb->ki_pos > old_size && !ext4_verity_in_progress(inode)) {
+		ret =3D ext4_block_zero_eof(inode, old_size, iocb->ki_pos);
+		if (ret < 0)
+			return ret;
+	}
+
 	return count;
 }
=20
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 530197a53208..1479ff56f7d0 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1456,10 +1456,9 @@ static int ext4_write_end(const struct kiocb *iocb,
 	folio_unlock(folio);
 	folio_put(folio);
=20
-	if (old_size < pos && !verity) {
+	if (old_size < pos && !verity)
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_block_zero_eof(inode, old_size, pos);
-	}
+
 	/*
 	 * Don't mark the inode dirty under folio lock. First, it unnecessarily
 	 * makes the holding time of folio lock longer.
Second, it forces lock @@ -1574,10 +1573,8 @@ static int ext4_journalled_write_end(const struct ki= ocb *iocb, folio_unlock(folio); folio_put(folio); =20 - if (old_size < pos && !verity) { + if (old_size < pos && !verity) pagecache_isize_extended(inode, old_size, pos); - ext4_block_zero_eof(inode, old_size, pos); - } =20 if (size_changed) { ret2 =3D ext4_mark_inode_dirty(handle, inode); @@ -3196,7 +3193,7 @@ static int ext4_da_do_write_end(struct address_space = *mapping, struct inode *inode =3D mapping->host; loff_t old_size =3D inode->i_size; bool disksize_changed =3D false; - loff_t new_i_size, zero_len =3D 0; + loff_t new_i_size; handle_t *handle; =20 if (unlikely(!folio_buffers(folio))) { @@ -3240,19 +3237,15 @@ static int ext4_da_do_write_end(struct address_spac= e *mapping, folio_unlock(folio); folio_put(folio); =20 - if (pos > old_size) { + if (pos > old_size) pagecache_isize_extended(inode, old_size, pos); - zero_len =3D pos - old_size; - } =20 - if (!disksize_changed && !zero_len) + if (!disksize_changed) return copied; =20 - handle =3D ext4_journal_start(inode, EXT4_HT_INODE, 2); + handle =3D ext4_journal_start(inode, EXT4_HT_INODE, 1); if (IS_ERR(handle)) return PTR_ERR(handle); - if (zero_len) - ext4_block_zero_eof(inode, old_size, pos); ext4_mark_inode_dirty(handle, inode); ext4_journal_stop(handle); =20 --=20 2.52.0