From: Muchun Song <songmuchun@bytedance.com>
To: akpm@linux-foundation.org, corbet@lwn.net, david@redhat.com,
    mike.kravetz@oracle.com, osalvador@suse.de, paulmck@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com
Subject: [PATCH v5 1/2] mm: memory_hotplug: enumerate all supported section flags
Date: Mon, 20 Jun 2022 19:06:15 +0800
Message-Id: <20220620110616.12056-2-songmuchun@bytedance.com>
In-Reply-To: <20220620110616.12056-1-songmuchun@bytedance.com>
References: <20220620110616.12056-1-songmuchun@bytedance.com>

We are almost running out of section flags: only one bit is still
available in the worst case (powerpc with 256k pages). However, there
are still free bits in ->section_mem_map on other architectures (e.g.
x86_64 has 10 bits available, arm64 has 8 bits available in the worst
case of 64K pages). Those numbers are hard-coded, which makes it
inconvenient to use the spare bits on architectures other than powerpc.
So convert the section flags to an enumeration to make it easy to add
new section flags in the future. Also, move SECTION_TAINT_ZONE_DEVICE
into the scope of CONFIG_ZONE_DEVICE to save a bit in the
non-zone-device case.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mmzone.h | 41 ++++++++++++++++++++++++++++++++---------
 mm/memory_hotplug.c    |  6 ++++++
 mm/sparse.c            |  2 +-
 3 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aab70355d64f..2b5757752333 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1418,16 +1418,32 @@ extern size_t mem_section_usage_size(void);
  * (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
  * worst combination is powerpc with 256k pages,
  * which results in PFN_SECTION_SHIFT equal 6.
- * To sum it up, at least 6 bits are available.
+ * To sum it up, at least 6 bits are available on all architectures.
+ * However, we can exceed 6 bits on architectures other than powerpc
+ * (e.g. 15 bits are available on x86_64, 13 bits are available in the
+ * worst case of 64K pages on arm64) if we make sure the extra bits
+ * are not used on powerpc.
  */
-#define SECTION_MARKED_PRESENT		(1UL<<0)
-#define SECTION_HAS_MEM_MAP		(1UL<<1)
-#define SECTION_IS_ONLINE		(1UL<<2)
-#define SECTION_IS_EARLY		(1UL<<3)
-#define SECTION_TAINT_ZONE_DEVICE	(1UL<<4)
-#define SECTION_MAP_LAST_BIT		(1UL<<5)
-#define SECTION_MAP_MASK		(~(SECTION_MAP_LAST_BIT-1))
-#define SECTION_NID_SHIFT		6
+enum {
+	SECTION_MARKED_PRESENT_BIT,
+	SECTION_HAS_MEM_MAP_BIT,
+	SECTION_IS_ONLINE_BIT,
+	SECTION_IS_EARLY_BIT,
+#ifdef CONFIG_ZONE_DEVICE
+	SECTION_TAINT_ZONE_DEVICE_BIT,
+#endif
+	SECTION_MAP_LAST_BIT,
+};
+
+#define SECTION_MARKED_PRESENT		BIT(SECTION_MARKED_PRESENT_BIT)
+#define SECTION_HAS_MEM_MAP		BIT(SECTION_HAS_MEM_MAP_BIT)
+#define SECTION_IS_ONLINE		BIT(SECTION_IS_ONLINE_BIT)
+#define SECTION_IS_EARLY		BIT(SECTION_IS_EARLY_BIT)
+#ifdef CONFIG_ZONE_DEVICE
+#define SECTION_TAINT_ZONE_DEVICE	BIT(SECTION_TAINT_ZONE_DEVICE_BIT)
+#endif
+#define SECTION_MAP_MASK		(~(BIT(SECTION_MAP_LAST_BIT) - 1))
+#define SECTION_NID_SHIFT		SECTION_MAP_LAST_BIT

 static inline struct page *__section_mem_map_addr(struct mem_section *section)
 {
@@ -1466,12 +1482,19 @@ static inline int online_section(struct mem_section *section)
 	return (section && (section->section_mem_map & SECTION_IS_ONLINE));
 }

+#ifdef CONFIG_ZONE_DEVICE
 static inline int online_device_section(struct mem_section *section)
 {
 	unsigned long flags = SECTION_IS_ONLINE | SECTION_TAINT_ZONE_DEVICE;

 	return section && ((section->section_mem_map & flags) == flags);
 }
+#else
+static inline int online_device_section(struct mem_section *section)
+{
+	return 0;
+}
+#endif

 static inline int online_section_nr(unsigned long nr)
 {
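The shape of the hunk above is worth spelling out: the enum allocates bit
positions densely, BIT() turns a position into a flag mask, and a member
guarded by an #ifdef simply never gets a bit when the option is off, so
SECTION_MAP_LAST_BIT (and everything derived from it, such as
SECTION_NID_SHIFT) shrinks automatically. A minimal, self-contained
sketch of the same pattern in plain userspace C (the flag names here are
hypothetical, not kernel code):

	#include <stdio.h>

	#define BIT(nr)	(1UL << (nr))

	enum {
		FLAG_A_BIT,
		FLAG_B_BIT,
	#ifdef WANT_OPTIONAL_FLAG
		FLAG_OPTIONAL_BIT,	/* only allocated when the option is on */
	#endif
		FLAG_LAST_BIT,		/* first bit position that is NOT a flag */
	};

	#define FLAG_A		BIT(FLAG_A_BIT)
	#define FLAG_B		BIT(FLAG_B_BIT)
	#define FLAG_MASK	(~(BIT(FLAG_LAST_BIT) - 1))	/* everything above the flags */

	int main(void)
	{
		/* With WANT_OPTIONAL_FLAG undefined, only two flag bits are allocated. */
		printf("flags occupy %d low bits, non-flag mask %#lx\n",
		       FLAG_LAST_BIT, FLAG_MASK);
		return 0;
	}

Compile once with -DWANT_OPTIONAL_FLAG and once without to see
FLAG_LAST_BIT and FLAG_MASK move on their own.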
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1f1a730c4499..6662b86e9e64 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -670,12 +670,18 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon

 }

+#ifdef CONFIG_ZONE_DEVICE
 static void section_taint_zone_device(unsigned long pfn)
 {
 	struct mem_section *ms = __pfn_to_section(pfn);

 	ms->section_mem_map |= SECTION_TAINT_ZONE_DEVICE;
 }
+#else
+static inline void section_taint_zone_device(unsigned long pfn)
+{
+}
+#endif

 /*
  * Associate the pfn range with the given zone, initializing the memmaps
diff --git a/mm/sparse.c b/mm/sparse.c
index cb3bfae64036..e5a8a3a0edd7 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -281,7 +281,7 @@ static unsigned long sparse_encode_mem_map(struct page *mem_map, unsigned long p
 {
 	unsigned long coded_mem_map = (unsigned long)(mem_map - (section_nr_to_pfn(pnum)));
-	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > (1UL<<PFN_SECTION_SHIFT));
+	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > PFN_SECTION_SHIFT);
 	BUG_ON(coded_mem_map & ~SECTION_MAP_MASK);
 	return coded_mem_map;
 }
-- 
2.11.0
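A subtle point in the patch above: SECTION_NID_SHIFT is no longer the
literal 6 but tracks SECTION_MAP_LAST_BIT, because during early boot
sparse stashes the node id in the same ->section_mem_map word, above the
flag bits, so the shift must begin exactly where the flags end. A
self-contained model of that encoding (simplified from the
sparse_encode_early_nid()/sparse_early_nid() helpers in mm/sparse.c; the
constant 5 below is illustrative, not the real value):

	#include <assert.h>

	#define BIT(nr)			(1UL << (nr))
	#define SECTION_MAP_LAST_BIT	5	/* pretend five flags are in use */
	#define SECTION_NID_SHIFT	SECTION_MAP_LAST_BIT

	static unsigned long encode_early_nid(int nid)
	{
		return (unsigned long)nid << SECTION_NID_SHIFT;
	}

	static int decode_early_nid(unsigned long section_mem_map)
	{
		return (int)(section_mem_map >> SECTION_NID_SHIFT);
	}

	int main(void)
	{
		unsigned long map = encode_early_nid(3);

		/* The nid round-trips and never overlaps the flag bits. */
		assert(decode_early_nid(map) == 3);
		assert((map & (BIT(SECTION_MAP_LAST_BIT) - 1)) == 0);
		return 0;
	}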

From: Muchun Song <songmuchun@bytedance.com>
To: akpm@linux-foundation.org, corbet@lwn.net, david@redhat.com,
    mike.kravetz@oracle.com, osalvador@suse.de, paulmck@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com
Subject: [PATCH v5 2/2] mm: memory_hotplug: make hugetlb_optimize_vmemmap compatible with memmap_on_memory
Date: Mon, 20 Jun 2022 19:06:16 +0800
Message-Id: <20220620110616.12056-3-songmuchun@bytedance.com>
In-Reply-To: <20220620110616.12056-1-songmuchun@bytedance.com>
References: <20220620110616.12056-1-songmuchun@bytedance.com>

For now, hugetlb_free_vmemmap is not compatible with
memory_hotplug.memmap_on_memory, and hugetlb_free_vmemmap takes
precedence over memory_hotplug.memmap_on_memory. However, some users
want memory_hotplug.memmap_on_memory to take precedence over
hugetlb_free_vmemmap, since memmap_on_memory makes memory hotplug more
likely to succeed in close-to-OOM situations. So unconditionally making
hugetlb_free_vmemmap take precedence is neither wise nor elegant.

The proper approach is to have hugetlb_vmemmap.c check whether the
section that the HugeTLB pages belong to can be optimized: if the
section's vmemmap pages were allocated from the added memory block
itself, hugetlb_free_vmemmap should refuse to optimize the vmemmap;
otherwise, it should do the optimization. Then both kernel parameters
are compatible. So this patch introduces the VmemmapSelfHosted page
flag to mark non-optimizable vmemmap pages; hugetlb_vmemmap uses this
flag to detect whether a vmemmap page can be optimized.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Co-developed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 22 ++++-----
 Documentation/admin-guide/sysctl/vm.rst         |  5 +-
 include/linux/memory_hotplug.h                  |  9 ----
 include/linux/page-flags.h                      | 11 +++++
 mm/hugetlb_vmemmap.c                            | 66 ++++++++++++++++++++++---
 mm/memory_hotplug.c                             | 27 +++++-----
 6 files changed, 93 insertions(+), 47 deletions(-)
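In outline, the new policy the diff below implements is a single check:
an optimization request is refused when the page backing the HugeTLB
page's vmemmap is itself hosted on the hotplugged block. A simplified,
self-contained model of that decision (plain C with stand-in types, not
the kernel implementation):

	#include <stdbool.h>
	#include <stdio.h>

	/* stands in for struct page plus PG_vmemmap_self_hosted */
	struct page_model {
		bool vmemmap_self_hosted;
	};

	/* models vmemmap_optimizable_pages(): 0 means "do not optimize" */
	static unsigned int optimizable_pages(const struct page_model *backing_page,
					      unsigned int candidate_pages)
	{
		if (backing_page->vmemmap_self_hosted)
			return 0;
		return candidate_pages;
	}

	int main(void)
	{
		struct page_model boot_vmemmap = { .vmemmap_self_hosted = false };
		struct page_model hotplug_vmemmap = { .vmemmap_self_hosted = true };

		printf("boot-time vmemmap:   optimize %u pages\n",
		       optimizable_pages(&boot_vmemmap, 7));
		printf("self-hosted vmemmap: optimize %u pages\n",
		       optimizable_pages(&hotplug_vmemmap, 7));
		return 0;
	}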
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 8090130b544b..d740e2ed0e61 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1722,9 +1722,11 @@
 			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
 			the default is on.

-			This is not compatible with memory_hotplug.memmap_on_memory.
-			If both parameters are enabled, hugetlb_free_vmemmap takes
-			precedence over memory_hotplug.memmap_on_memory.
+			Note that the vmemmap pages may be allocated from the added
+			memory block itself when memory_hotplug.memmap_on_memory is
+			enabled; those vmemmap pages cannot be optimized even if this
+			feature is enabled. Other vmemmap pages not allocated from
+			the added memory block itself are not affected.

 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
@@ -3069,10 +3071,12 @@
 			[KNL,X86,ARM] Boolean flag to enable this feature.
 			Format: {on | off (default)}
 			When enabled, runtime hotplugged memory will
-			allocate its internal metadata (struct pages)
-			from the hotadded memory which will allow to
-			hotadd a lot of memory without requiring
-			additional memory to do so.
+			allocate its internal metadata (struct pages,
+			those vmemmap pages cannot be optimized even
+			if hugetlb_free_vmemmap is enabled) from the
+			hotadded memory which will allow to hotadd a
+			lot of memory without requiring additional
+			memory to do so.
 			This feature is disabled by default because it has
 			some implication on large (e.g. GB) allocations in
 			some configurations (e.g. small memory blocks).
@@ -3082,10 +3086,6 @@
 			Note that even when enabled, there are a few cases where
 			the feature is not effective.

-			This is not compatible with hugetlb_free_vmemmap. If
-			both parameters are enabled, hugetlb_free_vmemmap takes
-			precedence over memory_hotplug.memmap_on_memory.
-
 	memtest=	[KNL,X86,ARM,M68K,PPC,RISCV] Enable memtest
 			Format: <integer>
 			default : 0 <- disabled
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 5c9aa171a0d3..d7374a1e8ac9 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -565,9 +565,8 @@ See Documentation/admin-guide/mm/hugetlbpage.rst
 hugetlb_optimize_vmemmap
 ========================

-This knob is not available when memory_hotplug.memmap_on_memory (kernel parameter)
-is configured or the size of 'struct page' (a structure defined in
-include/linux/mm_types.h) is not power of two (an unusual system config could
+This knob is not available when the size of 'struct page' (a structure defined
+in include/linux/mm_types.h) is not a power of two (an unusual system config could
 result in this).

 Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 20d7edf62a6a..e0b2209ab71c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -351,13 +351,4 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */

-#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-bool mhp_memmap_on_memory(void);
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
-}
-#endif
-
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e66f7aa3191d..2aa5dcbfe468 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -193,6 +193,11 @@ enum pageflags {

 	/* Only valid for buddy pages. Used to track pages that are reported */
 	PG_reported = PG_uptodate,
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+	/* For self-hosted memmap pages */
+	PG_vmemmap_self_hosted = PG_owner_priv_1,
+#endif
 };

 #define PAGEFLAGS_MASK		((1UL << NR_PAGEFLAGS) - 1)
@@ -628,6 +633,12 @@ PAGEFLAG_FALSE(SkipKASanPoison, skip_kasan_poison)
  */
 __PAGEFLAG(Reported, reported, PF_NO_COMPOUND)

+#ifdef CONFIG_MEMORY_HOTPLUG
+PAGEFLAG(VmemmapSelfHosted, vmemmap_self_hosted, PF_ANY)
+#else
+PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
+#endif
+
 /*
  * On an anonymous page mapped into a user virtual memory area,
  * page->mapping points to its anon_vma, not to a struct address_space;
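The PAGEFLAG(VmemmapSelfHosted, vmemmap_self_hosted, PF_ANY) line above
generates the usual trio of accessors; PF_ANY means the flag is valid on
any page, with no head/tail-page redirection. Roughly, as a
self-contained userspace model (the real macros in page-flags.h use
atomic bitops and compound-page policies):

	struct page {
		unsigned long flags;
	};

	#define PG_vmemmap_self_hosted	10	/* illustrative bit number */

	static inline int PageVmemmapSelfHosted(const struct page *page)
	{
		return (page->flags >> PG_vmemmap_self_hosted) & 1;
	}

	static inline void SetPageVmemmapSelfHosted(struct page *page)
	{
		page->flags |= 1UL << PG_vmemmap_self_hosted;
	}

	static inline void ClearPageVmemmapSelfHosted(struct page *page)
	{
		page->flags &= ~(1UL << PG_vmemmap_self_hosted);
	}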
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 1089ea8a9c98..6d9801bb3fec 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -10,7 +10,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt

-#include <linux/memory_hotplug.h>
+#include <linux/memory.h>
 #include "hugetlb_vmemmap.h"

 /*
@@ -97,18 +97,68 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	return ret;
 }

+static unsigned int vmemmap_optimizable_pages(struct hstate *h,
+					      struct page *head)
+{
+	if (READ_ONCE(vmemmap_optimize_mode) == VMEMMAP_OPTIMIZE_OFF)
+		return 0;
+
+	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
+		pmd_t *pmdp, pmd;
+		struct page *vmemmap_page;
+		unsigned long vaddr = (unsigned long)head;
+
+		/*
+		 * Only the vmemmap page's vmemmap page can be self-hosted.
+		 * Walk the page tables to find the backing page of the
+		 * vmemmap page.
+		 */
+		pmdp = pmd_off_k(vaddr);
+		/*
+		 * The READ_ONCE() is used to stabilize *pmdp in a register or
+		 * on the stack so that it will stop changing under the code.
+		 * The only concurrent operation where it can be changed is
+		 * split_vmemmap_huge_pmd() (*pmdp will be stable after this
+		 * operation).
+		 */
+		pmd = READ_ONCE(*pmdp);
+		if (pmd_leaf(pmd))
+			vmemmap_page = pmd_page(pmd) + pte_index(vaddr);
+		else
+			vmemmap_page = pte_page(*pte_offset_kernel(pmdp, vaddr));
+		/*
+		 * Due to HugeTLB alignment requirements and the vmemmap pages
+		 * being at the start of the hotplugged memory region in the
+		 * memory_hotplug.memmap_on_memory case, checking whether the
+		 * vmemmap page's vmemmap page is marked VmemmapSelfHosted is
+		 * sufficient.
+		 *
+		 * [      hotplugged memory      ]
+		 * [ section ][...][ section ]
+		 * [ vmemmap ][  usable memory  ]
+		 *   ^   |     |                |
+		 *   +---+     |                |
+		 *     ^       |                |
+		 *     +-------+                |
+		 *          ^                   |
+		 *          +-------------------+
+		 */
+		if (PageVmemmapSelfHosted(vmemmap_page))
+			return 0;
+	}
+
+	return hugetlb_optimize_vmemmap_pages(h);
+}
+
 void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
 	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;

-	vmemmap_pages = hugetlb_optimize_vmemmap_pages(h);
+	vmemmap_pages = vmemmap_optimizable_pages(h, head);
 	if (!vmemmap_pages)
 		return;

-	if (READ_ONCE(vmemmap_optimize_mode) == VMEMMAP_OPTIMIZE_OFF)
-		return;
-
 	static_branch_inc(&hugetlb_optimize_vmemmap_key);

 	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
@@ -199,10 +249,10 @@ static struct ctl_table hugetlb_vmemmap_sysctls[] = {
 static __init int hugetlb_vmemmap_sysctls_init(void)
 {
 	/*
-	 * If "memory_hotplug.memmap_on_memory" is enabled or "struct page"
-	 * crosses page boundaries, the vmemmap pages cannot be optimized.
+	 * If "struct page" crosses page boundaries, the vmemmap pages cannot
+	 * be optimized.
 	 */
-	if (!mhp_memmap_on_memory() && is_power_of_2(sizeof(struct page)))
+	if (is_power_of_2(sizeof(struct page)))
 		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);

 	return 0;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 6662b86e9e64..3a59d4e97c03 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -43,30 +43,22 @@
 #include "shuffle.h"

 #ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
-{
-	if (hugetlb_optimize_vmemmap_enabled())
-		return 0;
-	return param_set_bool(val, kp);
-}
-
-static const struct kernel_param_ops memmap_on_memory_ops = {
-	.flags = KERNEL_PARAM_OPS_FL_NOARG,
-	.set = memmap_on_memory_set,
-	.get = param_get_bool,
-};
-
 /*
  * memory_hotplug.memmap_on_memory parameter
  */
 static bool memmap_on_memory __ro_after_init;
-module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
+module_param(memmap_on_memory, bool, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");

-bool mhp_memmap_on_memory(void)
+static inline bool mhp_memmap_on_memory(void)
 {
 	return memmap_on_memory;
 }
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
 #endif

 enum {
@@ -1035,7 +1027,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 			      struct zone *zone)
 {
 	unsigned long end_pfn = pfn + nr_pages;
-	int ret;
+	int ret, i;

 	ret = kasan_add_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
 	if (ret)
@@ -1043,6 +1035,9 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,

 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);

+	for (i = 0; i < nr_pages; i++)
+		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
+
 	/*
 	 * It might be that the vmemmap_pages fully span sections. If that is
 	 * the case, mark those sections online here as otherwise they will be
-- 
2.11.0
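Putting the two sides of the patch together: mhp_init_memmap_on_memory()
marks every page of the self-hosted memmap range, and
vmemmap_optimizable_pages() later refuses to optimize when the backing
vmemmap page carries the mark. An end-to-end toy model of that round
trip (plain C; the sizes and pfns are hypothetical):

	#include <assert.h>
	#include <stdbool.h>

	#define NR_MODEL_PAGES	8

	/* one bool stands in for PG_vmemmap_self_hosted on each struct page */
	static bool self_hosted[NR_MODEL_PAGES];

	/* models the SetPageVmemmapSelfHosted() loop in mhp_init_memmap_on_memory() */
	static void init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages)
	{
		for (unsigned long i = 0; i < nr_pages; i++)
			self_hosted[pfn + i] = true;
	}

	/* models the PageVmemmapSelfHosted() check in vmemmap_optimizable_pages() */
	static bool can_optimize(unsigned long backing_pfn)
	{
		return !self_hosted[backing_pfn];
	}

	int main(void)
	{
		/* say the first two pfns of the hotplugged block host its own memmap */
		init_memmap_on_memory(0, 2);

		assert(!can_optimize(0));	/* self-hosted vmemmap: refuse */
		assert(can_optimize(4));	/* vmemmap backed by boot memory: optimize */
		return 0;
	}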