From: Jaegeuk Kim
To: linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org, Matthew Wilcox
Cc: Jaegeuk Kim
Subject: [PATCH 3/4] mm/readahead: add a_ops->ra_folio_order to get a desired folio order
Date: Mon, 1 Dec 2025 21:01:26 +0000
Message-ID: <20251201210152.909339-4-jaegeuk@kernel.org>
In-Reply-To: <20251201210152.909339-1-jaegeuk@kernel.org>
References: <20251201210152.909339-1-jaegeuk@kernel.org>

This patch introduces a new address_space operation, a_ops->ra_folio_order(),
which proposes a folio order based on the order passed in by
page_cache_sync_ra(). Each filesystem can thus set its desired minimum
folio allocation order when fadvise(POSIX_FADV_WILLNEED) is requested.

Cc: linux-mm@kvack.org
Cc: Matthew Wilcox (Oracle)
Signed-off-by: Jaegeuk Kim
---
 include/linux/fs.h      |  4 ++++
 include/linux/pagemap.h | 12 ++++++++++++
 mm/readahead.c          |  6 ++++--
 3 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index c895146c1444..ddab68b7e03b 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -472,6 +472,10 @@ struct address_space_operations {
 	void (*is_dirty_writeback) (struct folio *, bool *dirty, bool *wb);
 	int (*error_remove_folio)(struct address_space *, struct folio *);
 
+	/* Min folio order to allocate pages. */
+	unsigned int (*ra_folio_order)(struct address_space *mapping,
+				       unsigned int order);
+
 	/* swapfile support */
 	int (*swap_activate)(struct swap_info_struct *sis, struct file *file,
 				sector_t *span);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 09b581c1d878..e1fe07477220 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -476,6 +476,18 @@ mapping_min_folio_order(const struct address_space *mapping)
 	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
 }
 
+static inline unsigned int
+mapping_ra_folio_order(struct address_space *mapping, unsigned int order)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 0;
+
+	if (!mapping->a_ops->ra_folio_order)
+		return order;
+
+	return mapping->a_ops->ra_folio_order(mapping, order);
+}
+
 static inline unsigned long
 mapping_min_folio_nrpages(const struct address_space *mapping)
 {
diff --git a/mm/readahead.c b/mm/readahead.c
index 5beaf7803554..8c7d08af6e00 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -592,8 +592,10 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 	 * A start of file, oversized read, or sequential cache miss:
 	 *   trivial case: (index - prev_index) == 1
 	 *   unaligned reads: (index - prev_index) == 0
+	 *   filesystem requests high-order allocation
 	 */
-	if (!index || req_count > max_pages || index - prev_index <= 1UL) {
+	if (!index || req_count > max_pages || index - prev_index <= 1UL ||
+	    mapping_ra_folio_order(ractl->mapping, 0)) {
 		ra->start = index;
 		ra->size = get_init_ra_size(req_count, max_pages);
 		ra->async_size = ra->size > req_count ? ra->size - req_count :
@@ -627,7 +629,7 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 	ra->size = min(contig_count + req_count, max_pages);
 	ra->async_size = 1;
 readit:
-	ra->order = 0;
+	ra->order = mapping_ra_folio_order(ractl->mapping, 0);
 	ractl->_index = ra->start;
 	page_cache_ra_order(ractl, ra);
 }
-- 
2.52.0.107.ga0afd4fd5b-goog