Currently, gigantic hugepages cannot use the overcommit mechanism
(nr_overcommit_hugepages), forcing users to permanently reserve memory via
nr_hugepages even when pages might not be actively used.
Remove this blanket restriction on gigantic hugepage overcommit.
This brings the same benefits to gigantic pages that regular hugepages already have:
- Memory is only taken out of regular use when actually needed
- Unused surplus pages can be returned to the system
- Better memory utilization, especially with CMA backing, which can
  significantly increase the chances of gigantic hugepage allocation
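
As an example of such CMA backing, a CMA area that gigantic pages can be
allocated from at runtime is reserved at boot via the hugetlb_cma= kernel
command-line parameter (the size below is illustrative):

  hugetlb_cma=6G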
Without this patch:
echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages
bash: echo: write error: Invalid argument
With this patch:
echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages
./mmap_hugetlb_test
Successfully allocated huge pages at address: 0x7f9d40000000
cat mmap_hugetlb_test.c
...
unsigned long ALLOC_SIZE = 3 * (unsigned long) HUGE_PAGE_SIZE;
addr = mmap(NULL,
	    ALLOC_SIZE, // 3GB
	    PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
	    -1,
	    0);

if (addr == MAP_FAILED) {
	fprintf(stderr, "mmap failed: %s\n", strerror(errno));
	return 1;
}
printf("Successfully allocated huge pages at address: %p\n", addr);
...
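
For reference, a complete, compilable version of the test could look like
the sketch below. The HUGE_PAGE_SIZE definition and the MAP_HUGE_1GB
fallback are assumptions filled in for illustration; they are not taken
from the elided parts of the original file.

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << 26)	/* log2(1GB) << MAP_HUGE_SHIFT */
#endif

#define HUGE_PAGE_SIZE (1UL << 30)	/* assumed 1GB gigantic page size */

int main(void)
{
	unsigned long ALLOC_SIZE = 3 * (unsigned long) HUGE_PAGE_SIZE;
	void *addr;

	addr = mmap(NULL,
		    ALLOC_SIZE, // 3GB
		    PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		    -1,
		    0);
	if (addr == MAP_FAILED) {
		fprintf(stderr, "mmap failed: %s\n", strerror(errno));
		return 1;
	}

	/* Fault the pages in so surplus pages are actually consumed. */
	memset(addr, 0, ALLOC_SIZE);

	printf("Successfully allocated huge pages at address: %p\n", addr);

	munmap(addr, ALLOC_SIZE);
	return 0;
}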
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
mm/hugetlb.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c07b7192aff26..93d0f8ae1fe84 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2232,7 +2232,7 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 {
 	struct folio *folio = NULL;
 
-	if (hstate_is_gigantic(h))
+	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
@@ -4294,7 +4294,7 @@ static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
 	unsigned long input;
 	struct hstate *h = kobj_to_hstate(kobj, NULL);
 
-	if (hstate_is_gigantic(h))
+	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return -EINVAL;
 
 	err = kstrtoul(buf, 10, &input);
@@ -5181,7 +5181,7 @@ static int hugetlb_overcommit_handler(const struct ctl_table *table, int write,
 
 	tmp = h->nr_overcommit_huge_pages;
 
-	if (write && hstate_is_gigantic(h))
+	if (write && hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return -EINVAL;
 
 	ret = proc_hugetlb_doulongvec_minmax(table, write, buffer, length, ppos,
--
2.47.3
On 06.10.25 20:56, Usama Arif wrote:
> Currently, gigantic hugepages cannot use the overcommit mechanism
> (nr_overcommit_hugepages), forcing users to permanently reserve memory via
> nr_hugepages even when pages might not be actively used.
>
> Remove this blanket restriction on gigantic hugepage overcommit.
[...]

No opinion from my side. I guess it won't harm anybody (but people
should be aware that "overcommit" with huge pages where we have no
allocation guarantees is a flawed concept).
--
Cheers
David / dhildenb
On Mon, 6 Oct 2025 19:56:07 +0100 Usama Arif <usamaarif642@gmail.com> wrote:

> Currently, gigantic hugepages cannot use the overcommit mechanism
> (nr_overcommit_hugepages), forcing users to permanently reserve memory via
> nr_hugepages even when pages might not be actively used.

Why did we do that?  Just an oversight?

> -	if (hstate_is_gigantic(h))
> +	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())

> -	if (hstate_is_gigantic(h))
> +	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())

> -	if (write && hstate_is_gigantic(h))
> +	if (write && hstate_is_gigantic(h) && !gigantic_page_runtime_supported())

Maybe a little helper for this?

(Little helpers are nice sites for code comments!)
On 07/10/2025 23:24, Andrew Morton wrote:
> On Mon, 6 Oct 2025 19:56:07 +0100 Usama Arif <usamaarif642@gmail.com> wrote:
>
>> Currently, gigantic hugepages cannot use the overcommit mechanism
>> (nr_overcommit_hugepages), forcing users to permanently reserve memory via
>> nr_hugepages even when pages might not be actively used.
>>
>
> Why did we do that?  Just an oversight?

I believe this restriction was added in 2011 [1], which was before there
was support for reserving 1G hugepages at runtime. Once that support was
added, I think we forgot to remove this restriction.

[1] https://git.zx2c4.com/linux-rng/commit/mm/hugetlb.c?id=adbe8726dc2a3805630d517270db17e3af86e526

>
>> -	if (hstate_is_gigantic(h))
>> +	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>
>> -	if (hstate_is_gigantic(h))
>> +	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>
>> -	if (write && hstate_is_gigantic(h))
>> +	if (write && hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>
> Maybe a little helper for this?
>
> (Little helpers are nice sites for code comments!)

Will add this in the next revision, along with a paragraph about the
history of why this restriction existed in the first place.

Thanks for the review!
Usama
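
For illustration, one possible shape for such a helper (the name and
placement are hypothetical, not the actual next-revision code):

/*
 * Overcommit needs to allocate huge pages at runtime. Gigantic pages
 * can only be overcommitted when the architecture supports allocating
 * and freeing them at runtime.
 */
static inline bool hstate_overcommit_supported(struct hstate *h)
{
	return !hstate_is_gigantic(h) || gigantic_page_runtime_supported();
}

The three checks above would then become "if (!hstate_overcommit_supported(h))"
(keeping the "write &&" condition in the sysctl handler).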