From nobody Mon May 11 07:47:23 2026
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v7 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
Date: Tue, 12 Apr 2022 19:14:31 +0800
Message-Id: <20220412111434.96498-2-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220412111434.96498-1-songmuchun@bytedance.com>
References: <20220412111434.96498-1-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

If the size of "struct page" is not a power of two while the feature of minimizing the overhead of struct page associated with each HugeTLB page is enabled, then the vmemmap pages of HugeTLB will be corrupted after remapping (a panic is about to happen
in theory). However, this can only happen with !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a conventional configuration nowadays. So it is not a real-world issue, just the result of a code review. But we still have to prevent anyone from building that combination of configurations.

In order to avoid scattering checks like "is_power_of_2(sizeof(struct page))" throughout mm/hugetlb_vmemmap.c, introduce a new macro, CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP, which is defined when the size of struct page is a power of two and CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is configured. Then make the code of this feature depend on this new macro, so that nobody can end up with an unexpected configuration.

A new autoconf_ext.h is introduced as well, which serves as an extension of autoconf.h: special configurations (e.g. CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP here) rely on autoconf.h (generated from Kconfig), so we cannot express them in Kconfig itself. After this change, it will be easy for someone to do a similar thing (add a new CONFIG) in the future.

Signed-off-by: Muchun Song
Suggested-by: Luis Chamberlain
Reported-by: kernel test robot
---
 Kbuild                     | 19 +++++++++++++++++++
 arch/x86/mm/init_64.c      |  2 +-
 include/linux/hugetlb.h    |  2 +-
 include/linux/kconfig.h    |  4 ++++
 include/linux/mm.h         |  2 +-
 include/linux/page-flags.h |  2 +-
 kernel/autoconf_ext.c      | 26 ++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.c       |  8 ++------
 mm/hugetlb_vmemmap.h       |  4 ++--
 mm/sparse-vmemmap.c        |  4 ++--
 scripts/mod/Makefile       |  2 ++
 11 files changed, 61 insertions(+), 14 deletions(-)
 create mode 100644 kernel/autoconf_ext.c

diff --git a/Kbuild b/Kbuild
index fa441b98c9f6..83c0d5a418d1 100644
--- a/Kbuild
+++ b/Kbuild
@@ -2,6 +2,12 @@
 #
 # Kbuild for top-level directory of the kernel

+# autoconf_ext.h is generated last since it depends on other generated headers,
+# however those other generated headers may include autoconf_ext.h. Use the
+# following macro to avoid circular dependency.
+
+KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
+
 #####
 # Generate bounds.h

@@ -37,6 +43,19 @@ $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
	$(call filechk,offsets,__ASM_OFFSETS_H__)

 #####
+# Generate autoconf_ext.h.
+
+autoconf_ext-file := include/generated/autoconf_ext.h
+
+always-y += $(autoconf_ext-file)
+targets += kernel/autoconf_ext.s
+
+kernel/autoconf_ext.s: $(bounds-file) $(timeconst-file) $(offsets-file)
+
+$(autoconf_ext-file): kernel/autoconf_ext.s FORCE
+	$(call filechk,offsets,__LINUX_AUTOCONF_EXT_H__)
+
+#####
 # Check for missing system calls

 always-y += missing-syscalls

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4b9e0012bbbf..9b8dfa6e4da8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1268,7 +1268,7 @@ static struct kcore_list kcore_vsyscall;

 static void __init register_page_bootmem_info(void)
 {
-#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP)
	int i;

	for_each_online_node(i)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ac2ece9e9c79..d42de8abd2b6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -623,7 +623,7 @@ struct hstate {
	unsigned int nr_huge_pages_node[MAX_NUMNODES];
	unsigned int free_huge_pages_node[MAX_NUMNODES];
	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
	unsigned int optimize_vmemmap_pages;
 #endif
 #ifdef CONFIG_CGROUP_HUGETLB

diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
index 20d1079e92b4..f2cb8be6d8d0 100644
--- a/include/linux/kconfig.h
+++ b/include/linux/kconfig.h
@@ -4,6 +4,10 @@

 #include <generated/autoconf.h>

+#ifndef __EXCLUDE_AUTOCONF_EXT_H
+#include <generated/autoconf_ext.h>
+#endif
+
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define __BIG_ENDIAN 4321
 #else

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e0ad13486035..4c36f77a5745 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3186,7 +3186,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif

-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 int vmemmap_remap_free(unsigned long start, unsigned long end,
		       unsigned long reuse);
 int vmemmap_remap_alloc(unsigned long start, unsigned long end,

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b70124b9c7c1..e409b10cd677 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -199,7 +199,7 @@ enum pageflags {

 #ifndef __GENERATING_BOUNDS_H

-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
			 hugetlb_optimize_vmemmap_key);

diff --git a/kernel/autoconf_ext.c b/kernel/autoconf_ext.c
new file mode 100644
index 000000000000..8475735c6fc9
--- /dev/null
+++ b/kernel/autoconf_ext.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generate definitions needed by the preprocessor.
+ * This code generates raw asm output which is post-processed
+ * to extract and format the required data.
+ */
+#include <linux/kbuild.h>
+#include <linux/log2.h>
+#include <linux/mm_types.h>
+
+int main(void)
+{
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+	    is_power_of_2(sizeof(struct page))) {
+		/*
+		 * The 2nd parameter of DEFINE() will go into the comments. Do
+		 * not pass 1 directly to it to make the generated macro more
+		 * clear for the readers.
+		 */
+		DEFINE(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP,
+		       IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+		       is_power_of_2(sizeof(struct page)));
+	}
+
+	return 0;
+}

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 2655434a946b..be73782cc1cf 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,6 +178,7 @@

 #include "hugetlb_vmemmap.h"

+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 /*
  * There are a lot of struct page structures associated with each HugeTLB page.
  * For tail pages, the value of compound_head is the same. So we can reuse first
@@ -194,12 +195,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);

 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
	if (!buf)
		return -EINVAL;

@@ -300,3 +295,4 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
	pr_info("can optimize %d vmemmap pages for %s\n",
		h->optimize_vmemmap_pages, h->name);
 }
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 109b0a53b6fe..3afae3ff37fa 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -10,7 +10,7 @@
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>

-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_free(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_init(struct hstate *h);
@@ -41,5 +41,5 @@ static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
 {
	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 52f36527bab3..6c7f1a9ce2dd 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -34,7 +34,7 @@
 #include
 #include

-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *
@@ -420,7 +420,7 @@ int vmemmap_remap_alloc(unsigned long start, unsigned long end,

	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */

 /*
  * Allocate a block of memory to be used to back the virtual memory map

diff --git a/scripts/mod/Makefile b/scripts/mod/Makefile
index c9e38ad937fd..f82ab128c086 100644
--- a/scripts/mod/Makefile
+++ b/scripts/mod/Makefile
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 OBJECT_FILES_NON_STANDARD := y
 CFLAGS_REMOVE_empty.o += $(CC_FLAGS_LTO)
+# See comments in Kbuild
+KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H

 hostprogs-always-y += modpost mk_elfconfig
 always-y += empty.o
-- 
2.11.0

From nobody Mon May 11 07:47:23 2026
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
    mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
    osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v7 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on
Date: Tue, 12 Apr 2022 19:14:32 +0800
Message-Id: <20220412111434.96498-3-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220412111434.96498-1-songmuchun@bytedance.com>
References: <20220412111434.96498-1-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

When "hugetlb_free_vmemmap=on" and "memory_hotplug.memmap_on_memory" are both passed on the boot cmdline, the variable "memmap_on_memory" is set to 1 even though the vmemmap pages will not be allocated from the hotadded memory, since the former parameter takes precedence over the latter. In the next patch, we want to enable or disable the feature of freeing vmemmap pages of HugeTLB via sysctl. We need a way to know whether memory_hotplug.memmap_on_memory is enabled when enabling the feature of freeing vmemmap pages, since those two features are not compatible; however, the variable "memmap_on_memory" cannot indicate this today. Do not set "memmap_on_memory" to 1 when both parameters are passed on the cmdline; this way, "memmap_on_memory" indicates whether the feature is really enabled by the user. Also introduce the mhp_memmap_on_memory() helper and move the definition of "memmap_on_memory" into the scope of CONFIG_MHP_MEMMAP_ON_MEMORY. In the next patch, mhp_memmap_on_memory() will also be exported to be used in hugetlb_vmemmap.c.
Signed-off-by: Muchun Song
---
 mm/memory_hotplug.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 74430f88853d..f6eab03397d3 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,14 +42,36 @@
 #include "internal.h"
 #include "shuffle.h"

+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
+{
+	if (hugetlb_optimize_vmemmap_enabled())
+		return 0;
+	return param_set_bool(val, kp);
+}
+
+static const struct kernel_param_ops memmap_on_memory_ops = {
+	.flags	= KERNEL_PARAM_OPS_FL_NOARG,
+	.set	= memmap_on_memory_set,
+	.get	= param_get_bool,
+};

 /*
  * memory_hotplug.memmap_on_memory parameter
  */
 static bool memmap_on_memory __ro_after_init;
-#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-module_param(memmap_on_memory, bool, 0444);
+module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
+
+static inline bool mhp_memmap_on_memory(void)
+{
+	return memmap_on_memory;
+}
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
 #endif

 enum {
@@ -1272,9 +1294,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
	 * altmap as an alternative source of memory, and we do not exactly
	 * populate a single PMD.
	 */
-	return memmap_on_memory &&
-	       !hugetlb_optimize_vmemmap_enabled() &&
-	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+	return mhp_memmap_on_memory() &&
	       size == memory_block_size_bytes() &&
	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));
@@ -2081,7 +2101,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
	 * the same granularity it was added - a single memory block.
	 */
-	if (memmap_on_memory) {
+	if (mhp_memmap_on_memory()) {
		nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
						      get_nr_vmemmap_pages_cb);
		if (nr_vmemmap_pages) {
-- 
2.11.0

From nobody Mon May 11 07:47:23 2026
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v7 3/4] mm: hugetlb_vmemmap: use kstrtobool for hugetlb_vmemmap param parsing
Date: Tue, 12 Apr 2022 19:14:33 +0800
Message-Id: <20220412111434.96498-4-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220412111434.96498-1-songmuchun@bytedance.com>
References: <20220412111434.96498-1-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Use kstrtobool rather than open coding "on" and "off" parsing
in mm/hugetlb_vmemmap.c; it is more flexible, handling the full set of boolean spellings such as 'Y'/'y'/'1' and 'N'/'n'/'0' as well as case-insensitive "on" and "off".

Signed-off-by: Muchun Song
---
 Documentation/admin-guide/kernel-parameters.txt | 6 +++---
 mm/hugetlb_vmemmap.c                            | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f3cf9f21f6eb..6ea428023d51 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1669,10 +1669,10 @@
			enabled.
			Allows heavy hugetlb users to free up some more
			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
-			Format: { on | off (default) }
+			Format: { [oO][Nn]/Y/y/1 | [oO][Ff]/N/n/0 (default) }

-			on: enable the feature
-			off: disable the feature
+			[oO][Nn]/Y/y/1: enable the feature
+			[oO][Ff]/N/n/0: disable the feature

			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
			the default is on.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index be73782cc1cf..4b6a5cf16f11 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -195,15 +195,15 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);

 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	if (!buf)
+	bool enable;
+
+	if (kstrtobool(buf, &enable))
		return -EINVAL;

-	if (!strcmp(buf, "on"))
+	if (enable)
		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else if (!strcmp(buf, "off"))
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
	else
-		return -EINVAL;
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);

	return 0;
 }
-- 
2.11.0

From nobody Mon May 11 07:47:23 2026
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v7 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl
Date: Tue, 12 Apr 2022 19:14:34
+0800
Message-Id: <20220412111434.96498-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220412111434.96498-1-songmuchun@bytedance.com>
References: <20220412111434.96498-1-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and reboot the server to enable or disable the feature of optimizing vmemmap pages associated with HugeTLB pages. However, rebooting usually takes a long time, so add a sysctl to enable or disable the feature at runtime without rebooting.

Once enabled, the vmemmap pages of subsequently allocated HugeTLB pages from the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be optimized. When those optimized HugeTLB pages are freed from the HugeTLB pool back to the buddy allocator, the vmemmap pages representing that range need to be remapped again, and the vmemmap pages discarded earlier need to be reallocated. If your use case is that HugeTLB pages are allocated 'on the fly' instead of being pulled from the HugeTLB pool, you should weigh the benefit of memory savings against the extra overhead of allocating or freeing HugeTLB pages between the HugeTLB pool and the buddy allocator. Another behavior to note is that if the system is under heavy memory pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB pool to the buddy allocator, since the allocation of vmemmap pages could fail; you have to retry later if your system encounters this situation.

Once disabled, the vmemmap pages of subsequently allocated HugeTLB pages from the buddy allocator will not be optimized, whereas already optimized HugeTLB pages will not be affected.
If you want to make sure there are no optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and then disable this feature.

Signed-off-by: Muchun Song
---
 Documentation/admin-guide/sysctl/vm.rst | 27 ++++++++++
 include/linux/memory_hotplug.h          |  9 ++++
 mm/hugetlb_vmemmap.c                    | 90 +++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.h                    |  4 +-
 mm/memory_hotplug.c                     |  7 +--
 5 files changed, 121 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 747e325ebcd0..5c804ada85c5 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -562,6 +562,33 @@
 Change the minimum size of the hugepage pool.

 See Documentation/admin-guide/mm/hugetlbpage.rst


+hugetlb_optimize_vmemmap
+========================
+
+Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
+associated with each HugeTLB page.
+
+Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
+buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
+per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
+optimized.  When those optimized HugeTLB pages are freed from the HugeTLB pool
+to the buddy allocator, the vmemmap pages representing that range need to be
+remapped again and the vmemmap pages discarded earlier need to be reallocated
+again.  If your use case is that HugeTLB pages are allocated 'on the fly'
+instead of being pulled from the HugeTLB pool, you should weigh the benefits of
+memory savings against the extra overhead of allocating or freeing HugeTLB pages
+between the HugeTLB pool and the buddy allocator.
+Another behavior to note is that if the system is under heavy memory
+pressure, the user may be prevented from freeing HugeTLB pages from the
+HugeTLB pool to the buddy allocator, since the allocation of vmemmap pages
+could fail; retry later if your system encounters this situation.
+
+Once disabled, the vmemmap pages of subsequent allocations of HugeTLB pages
+from the buddy allocator will not be optimized, whereas already optimized
+HugeTLB pages will not be affected.  If you want to make sure there are no
+optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and then
+disable this.
+
+
 nr_hugepages_mempolicy
 ======================
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7ab15d6fb227..93f2cbea0f9b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -348,4 +348,13 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4b6a5cf16f11..94571d63916d 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -176,6 +176,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 #ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
@@ -189,21 +190,40 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+enum vmemmap_optimize_mode {
+	VMEMMAP_OPTIMIZE_OFF,
+	VMEMMAP_OPTIMIZE_ON,
+};
+
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
+static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+
+static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
+{
+	if (vmemmap_optimize_mode == to)
+		return;
+
+	if (to == VMEMMAP_OPTIMIZE_OFF)
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
+		static_branch_inc(&hugetlb_optimize_vmemmap_key);
+	vmemmap_optimize_mode = to;
+}
+
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
 	bool enable;
+	enum vmemmap_optimize_mode mode;
 
 	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (enable)
-		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
+	vmemmap_optimize_mode_switch(mode);
 
 	return 0;
 }
@@ -236,8 +256,10 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	}
 
 	return ret;
 }
@@ -251,6 +273,8 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	if (!vmemmap_pages)
 		return;
 
+	static_branch_inc(&hugetlb_optimize_vmemmap_key);
+
 	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
 	vmemmap_end = vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
 	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
@@ -260,7 +284,9 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
 	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
 	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
 		SetHPageVmemmapOptimized(head);
 }
 
@@ -277,9 +303,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >= RESERVE_VMEMMAP_SIZE /
 		     sizeof(struct page));
 
-	if (!hugetlb_optimize_vmemmap_enabled())
-		return;
-
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
@@ -295,4 +318,53 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+
+#ifdef CONFIG_PROC_SYSCTL
+static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
+					    void *buffer, size_t *length,
+					    loff_t *ppos)
+{
+	int ret;
+	enum vmemmap_optimize_mode mode;
+	static DEFINE_MUTEX(sysctl_mutex);
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	mutex_lock(&sysctl_mutex);
+	mode = vmemmap_optimize_mode;
+	table->data = &mode;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (write && !ret)
+		vmemmap_optimize_mode_switch(mode);
+	mutex_unlock(&sysctl_mutex);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_optimize_vmemmap",
+		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.mode		= 0644,
+		.proc_handler	= hugetlb_optimize_vmemmap_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * The vmemmap pages cannot be optimized if
+	 * "memory_hotplug.memmap_on_memory" is enabled.
+	 */
+	if (!mhp_memmap_on_memory())
+		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+
+	return 0;
+}
+late_initcall(hugetlb_vmemmap_sysctls_init);
+#endif /* CONFIG_PROC_SYSCTL */
 #endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3afae3ff37fa..63ae2766ffe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,7 +21,9 @@ void hugetlb_vmemmap_init(struct hstate *h);
  */
 static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
 {
-	return h->optimize_vmemmap_pages;
+	if (hugetlb_optimize_vmemmap_enabled())
+		return h->optimize_vmemmap_pages;
+	return 0;
 }
 #else
 static inline int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f6eab03397d3..af844a01e956 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -63,15 +63,10 @@ static bool memmap_on_memory __ro_after_init;
 module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
 
-static inline bool mhp_memmap_on_memory(void)
+bool mhp_memmap_on_memory(void)
 {
 	return memmap_on_memory;
 }
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
-}
 #endif
 
 enum {
-- 
2.11.0