From: Zhang Yi
To:
 linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
	ojaswin@linux.ibm.com, ritesh.list@gmail.com, hch@infradead.org,
	djwong@kernel.org, yi.zhang@huawei.com, yi.zhang@huaweicloud.com,
	yizhang089@gmail.com, libaokun1@huawei.com, yangerkun@huawei.com,
	yukuai@fnnas.com
Subject: [PATCH -next v2 08/22] ext4: zero post EOF partial block before appending write
Date: Tue, 3 Feb 2026 14:25:08 +0800
Message-ID: <20260203062523.3869120-9-yi.zhang@huawei.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260203062523.3869120-1-yi.zhang@huawei.com>
References: <20260203062523.3869120-1-yi.zhang@huawei.com>
MIME-Version: 1.0
Sender: yi.zhang@huaweicloud.com
Content-Type: text/plain; charset="utf-8"

In cases of an appending write beyond the end of file (EOF),
ext4_zero_partial_blocks() is called within ext4_*_write_end() to zero
out the partial block beyond the EOF. This prevents exposing stale data
that might have been written through mmap. However, supporting only the
regular buffered write path is insufficient; the DAX path and the
upcoming iomap buffered write path need the same treatment. Therefore,
move this operation into ext4_write_checks().

In addition, the zeroing length is limited to within the EOF block,
which prevents ext4_zero_partial_blocks() from attempting to zero an
extra block at the end (although doing so would be a no-op anyway).

Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
 fs/ext4/file.c  | 20 ++++++++++++++++++++
 fs/ext4/inode.c | 21 +++++++--------------
 2 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 4320ebff74f3..3ecc09f286e4 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -271,6 +271,9 @@ static ssize_t ext4_generic_write_checks(struct kiocb *iocb,
 
 static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from)
 {
+	struct inode *inode = file_inode(iocb->ki_filp);
+	unsigned int blocksize = i_blocksize(inode);
+	loff_t old_size = i_size_read(inode);
 	ssize_t ret, count;
 
 	count = ext4_generic_write_checks(iocb, from);
@@ -280,6 +283,23 @@ static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from)
 	ret = file_modified(iocb->ki_filp);
 	if (ret)
 		return ret;
+
+	/*
+	 * If the position is beyond the EOF, it is necessary to zero out
+	 * the partial block beyond the existing EOF, as it may contain
+	 * stale data written through mmap.
+	 */
+	if (iocb->ki_pos > old_size && (old_size & (blocksize - 1))) {
+		loff_t end = round_up(old_size, blocksize);
+
+		if (iocb->ki_pos < end)
+			end = iocb->ki_pos;
+
+		ret = ext4_zero_partial_blocks(inode, old_size, end - old_size);
+		if (ret)
+			return ret;
+	}
+
 	return count;
 }
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 9c0e70256527..1ac93c39d21e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1456,10 +1456,9 @@ static int ext4_write_end(const struct kiocb *iocb,
 	folio_unlock(folio);
 	folio_put(folio);
 
-	if (old_size < pos && !verity) {
+	if (old_size < pos && !verity)
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_zero_partial_blocks(inode, old_size, pos - old_size);
-	}
+
 	/*
 	 * Don't mark the inode dirty under folio lock. First, it unnecessarily
 	 * makes the holding time of folio lock longer. Second, it forces lock
@@ -1574,10 +1573,8 @@ static int ext4_journalled_write_end(const struct kiocb *iocb,
 	folio_unlock(folio);
 	folio_put(folio);
 
-	if (old_size < pos && !verity) {
+	if (old_size < pos && !verity)
 		pagecache_isize_extended(inode, old_size, pos);
-		ext4_zero_partial_blocks(inode, old_size, pos - old_size);
-	}
 
 	if (size_changed) {
 		ret2 = ext4_mark_inode_dirty(handle, inode);
@@ -3196,7 +3193,7 @@ static int ext4_da_do_write_end(struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	loff_t old_size = inode->i_size;
 	bool disksize_changed = false;
-	loff_t new_i_size, zero_len = 0;
+	loff_t new_i_size;
 	handle_t *handle;
 
 	if (unlikely(!folio_buffers(folio))) {
@@ -3240,19 +3237,15 @@ static int ext4_da_do_write_end(struct address_space *mapping,
 	folio_unlock(folio);
 	folio_put(folio);
 
-	if (pos > old_size) {
+	if (pos > old_size)
 		pagecache_isize_extended(inode, old_size, pos);
-		zero_len = pos - old_size;
-	}
 
-	if (!disksize_changed && !zero_len)
+	if (!disksize_changed)
 		return copied;
 
-	handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
+	handle = ext4_journal_start(inode, EXT4_HT_INODE, 1);
 	if (IS_ERR(handle))
 		return PTR_ERR(handle);
-	if (zero_len)
-		ext4_zero_partial_blocks(inode, old_size, zero_len);
 	ext4_mark_inode_dirty(handle, inode);
 	ext4_journal_stop(handle);
 
-- 
2.52.0