From: "Pankaj Raghav (Samsung)"
To: brauner@kernel.org, akpm@linux-foundation.org
Cc: chandan.babu@oracle.com, linux-fsdevel@vger.kernel.org, djwong@kernel.org,
	hare@suse.de, gost.dev@samsung.com, linux-xfs@vger.kernel.org,
	kernel@pankajraghav.com, hch@lst.de, david@fromorbit.com, Zi Yan,
	yang@os.amperecomputing.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, willy@infradead.org, john.g.garry@oracle.com,
	cl@os.amperecomputing.com, p.raghav@samsung.com, mcgrof@kernel.org,
	ryan.roberts@arm.com, Dave Chinner
Subject: [PATCH v13 10/10] xfs: enable block size larger than page size support
Date: Thu, 22 Aug 2024 15:50:18 +0200
Message-ID: <20240822135018.1931258-11-kernel@pankajraghav.com>
In-Reply-To: <20240822135018.1931258-1-kernel@pankajraghav.com>
References: <20240822135018.1931258-1-kernel@pankajraghav.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Pankaj Raghav

Page cache now has the ability to have a minimum order when allocating
a folio, which is a prerequisite to add support for block size > page
size.

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
Reviewed-by: Darrick J. Wong
Reviewed-by: Dave Chinner
---
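A quick illustration (not part of the patch, and not kernel code): the
minimum folio order is simply the difference between the block size log2
and PAGE_SHIFT, mirroring the xfs_ialloc_setup_geometry() hunk below, so
e.g. a 16k block size on a 4k page machine needs at least order-2 folios.
The sketch assumes 4k pages (PAGE_SHIFT of 12); the "blocklog" name is
made up for the example.

/* Illustration only: block size vs. minimum page cache folio order. */
#include <stdio.h>

#define PAGE_SHIFT	12			/* assumed 4k pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	unsigned int blocklog;

	for (blocklog = 12; blocklog <= 16; blocklog++) {
		unsigned long blocksize = 1UL << blocklog;
		unsigned int min_order = 0;

		if (blocksize > PAGE_SIZE)
			min_order = blocklog - PAGE_SHIFT;

		printf("blocksize %6lu -> min folio order %u (folio >= %lu bytes)\n",
		       blocksize, min_order, PAGE_SIZE << min_order);
	}
	return 0;
}
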
 fs/xfs/libxfs/xfs_ialloc.c |  5 +++++
 fs/xfs/libxfs/xfs_shared.h |  3 +++
 fs/xfs/xfs_icache.c        |  6 ++++--
 fs/xfs/xfs_mount.c         |  1 -
 fs/xfs/xfs_super.c         | 28 ++++++++++++++++++++--------
 include/linux/pagemap.h    | 13 +++++++++++++
 6 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 0af5b7a33d055..1921b689888b8 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -3033,6 +3033,11 @@ xfs_ialloc_setup_geometry(
 		igeo->ialloc_align = mp->m_dalign;
 	else
 		igeo->ialloc_align = 0;
+
+	if (mp->m_sb.sb_blocksize > PAGE_SIZE)
+		igeo->min_folio_order = mp->m_sb.sb_blocklog - PAGE_SHIFT;
+	else
+		igeo->min_folio_order = 0;
 }
 
 /* Compute the location of the root directory inode that is laid out by mkfs. */
diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
index 2f7413afbf46c..33b84a3a83ff6 100644
--- a/fs/xfs/libxfs/xfs_shared.h
+++ b/fs/xfs/libxfs/xfs_shared.h
@@ -224,6 +224,9 @@ struct xfs_ino_geometry {
 	/* precomputed value for di_flags2 */
 	uint64_t new_diflags2;
 
+	/* minimum folio order of a page cache allocation */
+	unsigned int min_folio_order;
+
 };
 
 #endif /* __XFS_SHARED_H__ */
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index cf629302d48e7..0fcf235e50235 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -88,7 +88,8 @@ xfs_inode_alloc(
 
 	/* VFS doesn't initialise i_mode! */
 	VFS_I(ip)->i_mode = 0;
-	mapping_set_large_folios(VFS_I(ip)->i_mapping);
+	mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
+				    M_IGEO(mp)->min_folio_order);
 
 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -325,7 +326,8 @@ xfs_reinit_inode(
 	inode->i_uid = uid;
 	inode->i_gid = gid;
 	inode->i_state = state;
-	mapping_set_large_folios(inode->i_mapping);
+	mapping_set_folio_min_order(inode->i_mapping,
+				    M_IGEO(mp)->min_folio_order);
 	return error;
 }
 
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 3949f720b5354..c6933440f8066 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -134,7 +134,6 @@ xfs_sb_validate_fsb_count(
 {
 	uint64_t max_bytes;
 
-	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
 
 	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 210481b03fdb4..8cd76a01b543f 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1638,16 +1638,28 @@ xfs_fs_fill_super(
 		goto out_free_sb;
 	}
 
-	/*
-	 * Until this is fixed only page-sized or smaller data blocks work.
-	 */
 	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
-		xfs_warn(mp,
-	"File system with blocksize %d bytes. "
-	"Only pagesize (%ld) or less will currently work.",
+		size_t max_folio_size = mapping_max_folio_size_supported();
+
+		if (!xfs_has_crc(mp)) {
+			xfs_warn(mp,
+"V4 Filesystem with blocksize %d bytes. Only pagesize (%ld) or less is supported.",
 				mp->m_sb.sb_blocksize, PAGE_SIZE);
-		error = -ENOSYS;
-		goto out_free_sb;
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		if (mp->m_sb.sb_blocksize > max_folio_size) {
+			xfs_warn(mp,
+"block size (%u bytes) not supported; Only block size (%ld) or less is supported",
+				mp->m_sb.sb_blocksize, max_folio_size);
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		xfs_warn(mp,
+"EXPERIMENTAL: V5 Filesystem with Large Block Size (%d bytes) enabled.",
+			mp->m_sb.sb_blocksize);
 	}
 
 	/* Ensure this filesystem fits in the page cache limits */
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 4cc170949e9c0..55b254d951da7 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -374,6 +374,19 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
 #define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)
 #define MAX_PAGECACHE_ORDER	min(MAX_XAS_ORDER, PREFERRED_MAX_PAGECACHE_ORDER)
 
+/*
+ * mapping_max_folio_size_supported() - Check the max folio size supported
+ *
+ * The filesystem should call this function at mount time if there is a
+ * requirement on the folio mapping size in the page cache.
+ */
+static inline size_t mapping_max_folio_size_supported(void)
+{
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER);
+	return PAGE_SIZE;
+}
+
 /*
  * mapping_set_folio_order_range() - Set the orders supported by a file.
  * @mapping: The address space of the file.
-- 
2.44.1
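
A rough guide to the new mount-time bound (illustration only; the exact
numbers depend on the kernel configuration): with THP enabled on x86_64,
MAX_PAGECACHE_ORDER typically works out to min(11, 9) = 9, so
mapping_max_folio_size_supported() returns 2 MiB and any larger block size
is refused with -ENOSYS. The user-space sketch below hard-codes those
assumed values; it is not taken from a running kernel.

/* Illustration only: assumed 4k pages, XArray chunk shift 6, PMD order 9. */
#include <stdio.h>

#define PAGE_SHIFT			12
#define XA_CHUNK_SHIFT			6
#define MAX_XAS_ORDER			(XA_CHUNK_SHIFT * 2 - 1)
#define PREFERRED_MAX_PAGECACHE_ORDER	9
#define MAX_PAGECACHE_ORDER						\
	(MAX_XAS_ORDER < PREFERRED_MAX_PAGECACHE_ORDER ?		\
	 MAX_XAS_ORDER : PREFERRED_MAX_PAGECACHE_ORDER)

int main(void)
{
	/* Mirrors mapping_max_folio_size_supported() with THP enabled. */
	unsigned long max_folio_size = 1UL << (PAGE_SHIFT + MAX_PAGECACHE_ORDER);

	printf("page cache can back a block size of up to %lu bytes\n",
	       max_folio_size);
	return 0;
}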