charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
a fixed size of 256M. On systems with a larger base hugepage size (e.g.
512M on arm64 with 64k base pages), 256M is smaller than a single
hugepage, so hugetlbfs rounds the requested size down to zero pages and
the mount ends up with zero capacity (visible as size=0 in the mount
output).

As a result, write_to_hugetlbfs fails to mmap() the file with ENOMEM and
the test hangs, polling for a reservation that can never be made.
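The rounding is easy to reproduce by hand outside the test (a minimal
sketch, assuming a 512M default hugepage size; /tmp/huge_demo is a
hypothetical mount point, and exact mount flags such as seclabel will
vary by system):

# mkdir -p /tmp/huge_demo
# mount -t hugetlbfs -o pagesize=512M,size=256M none /tmp/huge_demo
# mount | grep huge_demo
none on /tmp/huge_demo type hugetlbfs (rw,relatime,pagesize=512M,size=0)
# umount /tmp/huge_demo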
--- Error log ---
# uname -r
6.12.0-xxx.el10.aarch64+64k
# ./charge_reserved_hugetlb.sh -cgroup-v2
# -----------------------------------------
...
# nr hugepages = 10
# writing cgroup limit: 5368709120
# writing reseravation limit: 5368709120
...
# write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
...
# mount |grep /mnt/huge
none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
# grep -i huge /proc/meminfo
...
HugePages_Total: 10
HugePages_Free: 10
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 524288 kB
Hugetlb: 5242880 kB
Drop the 'size=256M' mount option, so the filesystem capacity is bounded
only by the hugepage pool and is sufficient regardless of the HugeTLB
page size.
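With the option dropped, hugetlbfs no longer advertises a size cap (a
sketch of the expected post-fix mount state on the same assumed 512M
setup; without a size= option hugetlbfs omits the size field from its
mount options):

# mount -t hugetlbfs -o pagesize=512M none /mnt/huge
# mount | grep /mnt/huge
none on /mnt/huge type hugetlbfs (rw,relatime,pagesize=512M)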
Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
---
tools/testing/selftests/mm/charge_reserved_hugetlb.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index e1fe16bcbbe8..fa6713892d82 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -290,7 +290,7 @@ function run_test() {
setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
"$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
@@ -344,7 +344,7 @@ function run_multiple_cgroup_test() {
setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
"$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
--
2.49.0
On 12/21/25 7:26 AM, Li Wang wrote:
> [...]

Acked-by: Waiman Long <longman@redhat.com>
On 12/21/25 13:26, Li Wang wrote:
> [...]
> Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
Likely Andrew should add a CC of stable
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David
On Mon, 22 Dec 2025 11:01:17 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> > Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
>
> Likely Andrew should add a CC of stable
>
yep, thanks.