From nobody Mon Feb 9 15:25:47 2026
From: Mahmoud Adam
Cc: "Darrick J. Wong", Christoph Hellwig, Luis Chamberlain,
 Shin'ichiro Kawasaki, Jens Axboe, Xiubo Li, Ilya Dryomov, Jeff Layton,
 Alexander Viro, Theodore Ts'o, Andreas Dilger, Jaegeuk Kim, Chao Yu,
 Trond Myklebust, Anna Schumaker, Ryusuke Konishi,
 "Matthew Wilcox (Oracle)", Andrew Morton, Hannes Reinecke, Damien Le Moal
Subject: [PATCH 6.1 7/8] block: fix race between set_blocksize and read paths
Date: Tue, 21 Oct 2025 09:03:42 +0200
Message-ID: <20251021070353.96705-9-mngyadam@amazon.de>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251021070353.96705-2-mngyadam@amazon.de>
References: <20251021070353.96705-2-mngyadam@amazon.de>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Darrick J. Wong"

commit c0e473a0d226479e8e925d5ba93f751d8df628e9 upstream.

With the new large sector size support, it's now the case that
set_blocksize can change i_blkbits and the folio order in a manner that
conflicts with a concurrent reader and causes a kernel crash.

Specifically, let's say that udev-worker calls libblkid to detect the
labels on a block device.  The read call can create an order-0 folio to
read the first 4096 bytes from the disk.  But then udev is preempted.

Next, someone tries to mount an 8k-sectorsize filesystem from the same
block device.  The filesystem calls set_blocksize, which sets i_blkbits
to 13 (an 8192-byte block size) and the minimum folio order to 1.

Now udev resumes, still holding the order-0 folio it allocated.  It
then tries to schedule a read bio, and do_mpage_readahead tries to
create bufferheads for the folio.
Unfortunately, blocks_per_folio == 0 because the page size is 4096 but
the blocksize is 8192, so no bufferheads are attached and the bh walk
never sets bdev.  We then submit the bio with a NULL block device and
crash.

Therefore, truncate the page cache after flushing but before updating
i_blkbits.  However, that's not enough -- we also need to lock out file
IO and page faults during the update.  Take both the i_rwsem and the
invalidate_lock in exclusive mode for invalidations, and in shared mode
for read/write operations.

I don't know if this is the correct fix, but xfs/259 found it.

Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Reviewed-by: Luis Chamberlain
Tested-by: Shin'ichiro Kawasaki
Link: https://lore.kernel.org/r/174543795699.4139148.2086129139322431423.stgit@frogsfrogsfrogs
Signed-off-by: Jens Axboe
[use bdev->bd_inode instead & fix small contextual changes]
Signed-off-by: Mahmoud Adam
---
 block/bdev.c      | 17 +++++++++++++++++
 block/blk-zoned.c |  5 ++++-
 block/fops.c      | 16 ++++++++++++++++
 block/ioctl.c     |  6 ++++++
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/block/bdev.c b/block/bdev.c
index b61502ec8da06c..5a631a0ca46a81 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -147,9 +147,26 @@ int set_blocksize(struct block_device *bdev, int size)
 
 	/* Don't change the size if it is same as current */
 	if (bdev->bd_inode->i_blkbits != blksize_bits(size)) {
+		/*
+		 * Flush and truncate the pagecache before we reconfigure the
+		 * mapping geometry because folio sizes are variable now.  If a
+		 * reader has already allocated a folio whose size is smaller
+		 * than the new min_order but invokes readahead after the new
+		 * min_order becomes visible, readahead will think there are
+		 * "zero" blocks per folio and crash.  Take the inode and
+		 * invalidation locks to avoid racing with
+		 * read/write/fallocate.
+		 */
+		inode_lock(bdev->bd_inode);
+		filemap_invalidate_lock(bdev->bd_inode->i_mapping);
+
 		sync_blockdev(bdev);
+		kill_bdev(bdev);
+
 		bdev->bd_inode->i_blkbits = blksize_bits(size);
 		kill_bdev(bdev);
+		filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+		inode_unlock(bdev->bd_inode);
 	}
 	return 0;
 }
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index db829401d8d0ca..ef72612ca4645f 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -417,6 +417,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 		op = REQ_OP_ZONE_RESET;
 
 		/* Invalidate the page cache, including dirty pages. */
+		inode_lock(bdev->bd_inode);
 		filemap_invalidate_lock(bdev->bd_inode->i_mapping);
 		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
 		if (ret)
@@ -439,8 +440,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 				GFP_KERNEL);
 
 fail:
-	if (cmd == BLKRESETZONE)
+	if (cmd == BLKRESETZONE) {
 		filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+		inode_unlock(bdev->bd_inode);
+	}
 
 	return ret;
 }
diff --git a/block/fops.c b/block/fops.c
index fb7a57ed42d995..2fc6ac6679ee63 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -592,7 +592,14 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
 		ret = direct_write_fallback(iocb, from, ret,
 				generic_perform_write(iocb, from));
 	} else {
+		/*
+		 * Take i_rwsem and invalidate_lock to avoid racing with
+		 * set_blocksize changing i_blkbits/folio order and punching
+		 * out the pagecache.
+		 */
+		inode_lock_shared(bd_inode);
 		ret = generic_perform_write(iocb, from);
+		inode_unlock_shared(bd_inode);
 	}
 
 	if (ret > 0)
@@ -605,6 +612,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
 	struct block_device *bdev = iocb->ki_filp->private_data;
+	struct inode *bd_inode = bdev->bd_inode;
 	loff_t size = bdev_nr_bytes(bdev);
 	loff_t pos = iocb->ki_pos;
 	size_t shorted = 0;
@@ -652,7 +660,13 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		goto reexpand;
 	}
 
+	/*
+	 * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize
+	 * changing i_blkbits/folio order and punching out the pagecache.
+	 */
+	inode_lock_shared(bd_inode);
 	ret = filemap_read(iocb, to, ret);
+	inode_unlock_shared(bd_inode);
 
 reexpand:
 	if (unlikely(shorted))
@@ -695,6 +709,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	if ((start | len) & (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
+	inode_lock(inode);
 	filemap_invalidate_lock(inode->i_mapping);
 
 	/*
@@ -735,6 +750,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 
 fail:
 	filemap_invalidate_unlock(inode->i_mapping);
+	inode_unlock(inode);
 	return error;
 }
 
diff --git a/block/ioctl.c b/block/ioctl.c
index 552da0ccbec09e..b37bbe9b3487d8 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -114,6 +114,7 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 	    end > bdev_nr_bytes(bdev))
 		return -EINVAL;
 
+	inode_lock(inode);
 	filemap_invalidate_lock(inode->i_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end - 1);
 	if (err)
@@ -121,6 +122,7 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 	err = blkdev_issue_discard(bdev, start >> 9, len >> 9, GFP_KERNEL);
 fail:
 	filemap_invalidate_unlock(inode->i_mapping);
+	inode_unlock(inode);
 	return err;
 }
 
@@ -146,12 +148,14 @@ static int blk_ioctl_secure_erase(struct block_device *bdev, fmode_t mode,
 	    end > bdev_nr_bytes(bdev))
 		return -EINVAL;
 
+	inode_lock(bdev->bd_inode);
 	filemap_invalidate_lock(bdev->bd_inode->i_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end - 1);
 	if (!err)
 		err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
 						GFP_KERNEL);
 	filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+	inode_unlock(bdev->bd_inode);
 	return err;
 }
 
@@ -184,6 +188,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 		return -EINVAL;
 
 	/* Invalidate the page cache, including dirty pages */
+	inode_lock(inode);
 	filemap_invalidate_lock(inode->i_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end);
 	if (err)
@@ -194,6 +199,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 
 fail:
 	filemap_invalidate_unlock(inode->i_mapping);
+	inode_unlock(inode);
 	return err;
 }
 
-- 
2.47.3

Amazon Web Services Development Center Germany GmbH
Tamara-Danz-Str. 13
10243 Berlin
Geschaeftsfuehrung: Christian Schlaeger
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597