From nobody Fri Dec 19 13:29:07 2025
From: Pankaj Raghav
To: Suren Baghdasaryan,
Mike Rapoport,
David Hildenbrand,
Ryan Roberts,
Michal Hocko,
Lance Yang,
Lorenzo Stoakes,
Baolin Wang,
Dev Jain,
Barry Song,
Andrew Morton,
Nico Pache,
Zi Yan,
Vlastimil Babka,
"Liam R. Howlett",
Jens Axboe
Cc: linux-kernel@vger.kernel.org,
linux-mm@kvack.org,
linux-block@vger.kernel.org,
linux-fsdevel@vger.kernel.org,
mcgrof@kernel.org,
gost.dev@samsung.com,
kernel@pankajraghav.com,
tytso@mit.edu,
Pankaj Raghav
Subject: [RFC v2 1/3] filemap: set max order to be min order if THP is disabled
Date: Sat, 6 Dec 2025 04:08:56 +0100
Message-ID: <20251206030858.1418814-2-p.raghav@samsung.com>
In-Reply-To: <20251206030858.1418814-1-p.raghav@samsung.com>
References: <20251206030858.1418814-1-p.raghav@samsung.com>

Large folios in the page cache depend on the splitting infrastructure
from THP. To remove the dependency between large folios and
CONFIG_TRANSPARENT_HUGEPAGE, cap the max order to the min order when THP
is disabled. This guarantees that the splitting code is never needed
when THP is disabled, removing the dependency between large folios and
THP.
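
To illustrate (hypothetical numbers: a filesystem using 16K blocks on a
4K-page kernel), the order range a filesystem requests now collapses to
the min order when THP is disabled:

  /* Hypothetical example: a 16K-block filesystem on 4K pages. */
  unsigned int min = 2;			/* 16K == PAGE_SIZE << 2 */
  unsigned int max = MAX_PAGECACHE_ORDER;

  mapping_set_folio_order_range(mapping, min, max);
  /*
   * With CONFIG_TRANSPARENT_HUGEPAGE=n the range collapses to [2, 2]:
   * every folio in this mapping is exactly 16K, so no folio ever needs
   * to be split.
   */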
Signed-off-by: Pankaj Raghav
---
include/linux/pagemap.h | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 09b581c1d878..1bb0d4432d4b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -397,9 +397,7 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
*/
static inline size_t mapping_max_folio_size_supported(void)
{
- if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
- return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER);
- return PAGE_SIZE;
+ return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER);
}

/*
@@ -422,16 +420,17 @@ static inline void mapping_set_folio_order_range(struct address_space *mapping,
unsigned int min,
unsigned int max)
{
- if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
- return;
-
if (min > MAX_PAGECACHE_ORDER)
min = MAX_PAGECACHE_ORDER;

if (max > MAX_PAGECACHE_ORDER)
max = MAX_PAGECACHE_ORDER;

- if (max < min)
+ /* Large folios depend on THP infrastructure for splitting.
+ * If THP is disabled, cap the max order to the min order to
+ * avoid splitting folios.
+ */
+ if ((max < min) || !IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
max = min;

mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
@@ -463,16 +462,12 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
static inline unsigned int
mapping_max_folio_order(const struct address_space *mapping)
{
- if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
- return 0;
return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
}

static inline unsigned int
mapping_min_folio_order(const struct address_space *mapping)
{
- if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
- return 0;
return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
}

-- 
2.50.1
From nobody Fri Dec 19 13:29:07 2025
From: Pankaj Raghav
To: Suren Baghdasaryan,
Mike Rapoport,
David Hildenbrand,
Ryan Roberts,
Michal Hocko,
Lance Yang,
Lorenzo Stoakes,
Baolin Wang,
Dev Jain,
Barry Song,
Andrew Morton,
Nico Pache,
Zi Yan,
Vlastimil Babka,
"Liam R. Howlett",
Jens Axboe
Cc: linux-kernel@vger.kernel.org,
linux-mm@kvack.org,
linux-block@vger.kernel.org,
linux-fsdevel@vger.kernel.org,
mcgrof@kernel.org,
gost.dev@samsung.com,
kernel@pankajraghav.com,
tytso@mit.edu,
Pankaj Raghav
Subject: [RFC v2 2/3] huge_memory: skip warning if min order and folio order are the same in split
Date: Sat, 6 Dec 2025 04:08:57 +0100
Message-ID: <20251206030858.1418814-3-p.raghav@samsung.com>
In-Reply-To: <20251206030858.1418814-1-p.raghav@samsung.com>
References: <20251206030858.1418814-1-p.raghav@samsung.com>

When THP is disabled, the max order of file-backed large folios is
capped to the min order to avoid using the splitting infrastructure.

Currently, the split functions emit a warning when called with THP
disabled. But a split call does not have to do anything when the min
order is the same as the folio order.

So skip the warning in the folio split functions when the min order is
the same as the folio order for file-backed folios.

Due to circular header dependencies, move the definitions of the split
functions for !CONFIG_TRANSPARENT_HUGEPAGE to mm/memory.c.
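
The skipped warning boils down to the following check (a sketch; the
folio is assumed to be file-backed with a valid ->mapping):

  /*
   * With THP disabled, a file-backed folio is always at the mapping's
   * min order, so a split request is rejected quietly instead of
   * tripping VM_WARN_ON_ONCE_FOLIO().
   */
  if (!folio_test_anon(folio) &&
      mapping_min_folio_order(folio->mapping) == folio_order(folio))
  	return -EINVAL;	/* nothing to split, no warning */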
Signed-off-by: Pankaj Raghav
---
include/linux/huge_mm.h | 40 ++++++++--------------------------------
mm/memory.c | 41 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+), 32 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 21162493a0a0..71e309f2d26a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -612,42 +612,18 @@ can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
{
return false;
}
-static inline int
-split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
- unsigned int new_order)
-{
- VM_WARN_ON_ONCE_PAGE(1, page);
- return -EINVAL;
-}
-static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
-{
- VM_WARN_ON_ONCE_PAGE(1, page);
- return -EINVAL;
-}
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+ unsigned int new_order);
+int split_huge_page_to_order(struct page *page, unsigned int new_order);
static inline int split_huge_page(struct page *page)
{
- VM_WARN_ON_ONCE_PAGE(1, page);
- return -EINVAL;
-}
-
-static inline unsigned int min_order_for_split(struct folio *folio)
-{
- VM_WARN_ON_ONCE_FOLIO(1, folio);
- return 0;
-}
-
-static inline int split_folio_to_list(struct folio *folio, struct list_head *list)
-{
- VM_WARN_ON_ONCE_FOLIO(1, folio);
- return -EINVAL;
+ return split_huge_page_to_list_to_order(page, NULL, 0);
}

-static inline int try_folio_split_to_order(struct folio *folio,
- struct page *page, unsigned int new_order)
-{
- VM_WARN_ON_ONCE_FOLIO(1, folio);
- return -EINVAL;
-}
+unsigned int min_order_for_split(struct folio *folio);
+int split_folio_to_list(struct folio *folio, struct list_head *list);
+int try_folio_split_to_order(struct folio *folio,
+ struct page *page, unsigned int new_order);

static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
static inline void reparent_deferred_split_queue(struct mem_cgroup *memcg) {}
diff --git a/mm/memory.c b/mm/memory.c
index 6675e87eb7dd..4eccdf72a46e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4020,6 +4020,47 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
{
BUILD_BUG();
}
+
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+ unsigned int new_order)
+{
+ struct folio *folio = page_folio(page);
+
+ if (!folio_test_anon(folio) && folio->mapping &&
+ mapping_min_folio_order(folio->mapping) == folio_order(folio))
+ return -EINVAL;
+
+ VM_WARN_ON_ONCE_PAGE(1, page);
+ return -EINVAL;
+}
+
+int split_huge_page_to_order(struct page *page, unsigned int new_order)
+{
+ return split_huge_page_to_list_to_order(page, NULL, new_order);
+}
+
+int split_folio_to_list(struct folio *folio, struct list_head *list)
+{
+ struct address_space *mapping = folio->mapping;
+
+ if (!folio_test_anon(folio) && mapping &&
+ mapping_min_folio_order(mapping) == folio_order(folio))
+ return -EINVAL;
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
+}
+
+unsigned int min_order_for_split(struct folio *folio)
+{
+ return (folio_test_anon(folio) || !folio->mapping) ?
+ 0 : mapping_min_folio_order(folio->mapping);
+}
+
+int try_folio_split_to_order(struct folio *folio, struct page *page,
+ unsigned int new_order)
+{
+ return split_folio_to_list(folio, NULL);
+}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

static bool wp_can_reuse_anon_folio(struct folio *folio,
-- 
2.50.1
From nobody Fri Dec 19 13:29:07 2025
From: Pankaj Raghav
To: Suren Baghdasaryan,
Mike Rapoport,
David Hildenbrand,
Ryan Roberts,
Michal Hocko,
Lance Yang,
Lorenzo Stoakes,
Baolin Wang,
Dev Jain,
Barry Song,
Andrew Morton,
Nico Pache,
Zi Yan,
Vlastimil Babka,
"Liam R. Howlett",
Jens Axboe
Cc: linux-kernel@vger.kernel.org,
linux-mm@kvack.org,
linux-block@vger.kernel.org,
linux-fsdevel@vger.kernel.org,
mcgrof@kernel.org,
gost.dev@samsung.com,
kernel@pankajraghav.com,
tytso@mit.edu,
Pankaj Raghav
Subject: [RFC v2 3/3] blkdev: remove CONFIG_TRANSPARENT_HUGEPAGE dependency for LBS devices
Date: Sat, 6 Dec 2025 04:08:58 +0100
Message-ID: <20251206030858.1418814-4-p.raghav@samsung.com>
In-Reply-To: <20251206030858.1418814-1-p.raghav@samsung.com>
References: <20251206030858.1418814-1-p.raghav@samsung.com>

Now that the dependency between CONFIG_TRANSPARENT_HUGEPAGE and large
folios has been removed, enable LBS devices even when THP is disabled.
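
For example, a 64K logical block size now passes validation on a kernel
built without THP (a sketch; return values follow the
blk_validate_block_size() helper shown in the hunk below):

  /*
   * Accepted even with CONFIG_TRANSPARENT_HUGEPAGE=n, since
   * BLK_MAX_BLOCK_SIZE is now unconditionally SZ_64K.
   */
  if (blk_validate_block_size(SZ_64K))
  	pr_err("unexpected: 64K exceeds BLK_MAX_BLOCK_SIZE\n");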
Signed-off-by: Pankaj Raghav
---
include/linux/blkdev.h | 5 -----
1 file changed, 5 deletions(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 70b671a9a7f7..b6379d73f546 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -270,16 +270,11 @@ static inline dev_t disk_devt(struct gendisk *disk)
return MKDEV(disk->major, disk->first_minor);
}

-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
* We should strive for 1 << (PAGE_SHIFT + MAX_PAGECACHE_ORDER)
* however we constrain this to what we can validate and test.
*/
#define BLK_MAX_BLOCK_SIZE SZ_64K
-#else
-#define BLK_MAX_BLOCK_SIZE PAGE_SIZE
-#endif
-

/* blk_validate_limits() validates bsize, so drivers don't usually need to */
static inline int blk_validate_block_size(unsigned long bsize)
-- 
2.50.1