charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
this is smaller than a single hugepage, so the hugetlbfs mount ends up
with effectively zero capacity (often visible as size=0 in mount output).
As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
waiting for progress.
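For illustration only (not part of the patch): the zero-capacity mount follows from hugetlbfs rounding the size= option down to a whole number of huge pages, so any limit smaller than one page rounds to zero. The same rounding can be sketched with plain shell arithmetic:

```shell
#!/bin/sh
# Sketch of the hugetlbfs size= rounding, not the kernel code itself:
# the mount size is truncated to a whole number of base huge pages.
hugepage=$((512 * 1024 * 1024))   # 512M base huge page (e.g. arm64 64k kernel)
limit=$((256 * 1024 * 1024))      # the hardcoded size=256M from the script
pages=$((limit / hugepage))       # integer division: 0 pages fit under 256M
capacity=$((pages * hugepage))    # effective filesystem capacity in bytes
echo "pages=$pages capacity=$capacity"
```

With a 512M huge page this prints `pages=0 capacity=0`, matching the size=0 seen in the mount output below.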
--- Error log ---
# uname -r
6.12.0-xxx.el10.aarch64+64k
#./charge_reserved_hugetlb.sh -cgroup-v2
# -----------------------------------------
...
# nr hugepages = 10
# writing cgroup limit: 5368709120
# writing reseravation limit: 5368709120
...
# write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
...
# mount |grep /mnt/huge
none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
# grep -i huge /proc/meminfo
...
HugePages_Total: 10
HugePages_Free: 10
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 524288 kB
Hugetlb: 5242880 kB
Fix this by mounting hugetlbfs with size=${size}, the number of bytes the
test will reserve/write, so the filesystem capacity is sufficient
regardless of the HugeTLB page size.
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
---
tools/testing/selftests/mm/charge_reserved_hugetlb.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index 249a5776c074..ac2744dbc0bd 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -295,7 +295,7 @@ function run_test() {
setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M,size=${size} none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
"$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
@@ -349,7 +349,7 @@ function run_multiple_cgroup_test() {
setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M,size=${size} none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
"$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
--
2.49.0
On 12/21/25 09:58, Li Wang wrote:
> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> this is smaller than a single hugepage, so the hugetlbfs mount ends up
> with effectively zero capacity (often visible as size=0 in mount output).
>
> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> waiting for progress.

I'm curious, what's the history of using "256MB" in the first place (or
specifying any size?).

--
Cheers

David
David Hildenbrand (Red Hat) <david@kernel.org> wrote:
> On 12/21/25 09:58, Li Wang wrote:
> > charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> > a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> > this is smaller than a single hugepage, so the hugetlbfs mount ends up
> > with effectively zero capacity (often visible as size=0 in mount output).
> >
> > As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> > waiting for progress.
>
> I'm curious, what's the history of using "256MB" in the first place (or
> specifying any size?).
Seems the script initializes it with "256MB" from:
commit 29750f71a9b4cfae57cdddfbd8ca287eddca5503
Author: Mina Almasry <almasrymina@google.com>
Date: Wed Apr 1 21:11:38 2020 -0700
hugetlb_cgroup: add hugetlb_cgroup reservation tests
--
Regards,
Li Wang
On 12/21/25 10:44, Li Wang wrote:
> David Hildenbrand (Red Hat) <david@kernel.org> wrote:
>> On 12/21/25 09:58, Li Wang wrote:
>>> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
>>> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
>>> this is smaller than a single hugepage, so the hugetlbfs mount ends up
>>> with effectively zero capacity (often visible as size=0 in mount output).
>>>
>>> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
>>> waiting for progress.
>>
>> I'm curious, what's the history of using "256MB" in the first place (or
>> specifying any size?).
>
> Seems the script initializes it with "256MB" from:
>
> commit 29750f71a9b4cfae57cdddfbd8ca287eddca5503
> Author: Mina Almasry <almasrymina@google.com>
> Date: Wed Apr 1 21:11:38 2020 -0700
>
> hugetlb_cgroup: add hugetlb_cgroup reservation tests

What would happen if we don't specify a size at all?

--
Cheers

David
On Sun, Dec 21, 2025 at 5:49 PM David Hildenbrand (Red Hat)
<david@kernel.org> wrote:
>
> On 12/21/25 10:44, Li Wang wrote:
> > David Hildenbrand (Red Hat) <david@kernel.org> wrote:
> >> On 12/21/25 09:58, Li Wang wrote:
> >>> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> >>> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> >>> this is smaller than a single hugepage, so the hugetlbfs mount ends up
> >>> with effectively zero capacity (often visible as size=0 in mount output).
> >>>
> >>> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> >>> waiting for progress.
> >>
> >> I'm curious, what's the history of using "256MB" in the first place (or
> >> specifying any size?).
> >
> > Seems the script initializes it with "256MB" from:
> >
> > commit 29750f71a9b4cfae57cdddfbd8ca287eddca5503
> > Author: Mina Almasry <almasrymina@google.com>
> > Date: Wed Apr 1 21:11:38 2020 -0700
> >
> > hugetlb_cgroup: add hugetlb_cgroup reservation tests
>
> What would happen if we don't specify a size at all?
It still works well, I have gone through the whole file and
there is no subtest that relies on the 256M capability.
So we could just:
mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
--
Regards,
Li Wang