From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Oscar Salvador, Catalin Marinas, Will Deacon, Anshuman Khandual
Subject: [PATCH v2 1/8] mm: hugetlb_vmemmap: delete hugetlb_optimize_vmemmap_enabled()
Date: Tue, 28 Jun 2022 17:22:28 +0800
Message-Id: <20220628092235.91270-2-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>
The name hugetlb_optimize_vmemmap_enabled() is a bit confusing, as it tests
two conditions (the feature is enabled, and optimized pages are in use).
Instead of coming up with a more appropriate name, we can just delete it;
there is already a discussion about deleting it in thread [1].

There is only one user of hugetlb_optimize_vmemmap_enabled() outside of
hugetlb_vmemmap: flush_dcache_page() in arch/arm64/mm/flush.c. However,
flush_dcache_page() does not need to call it, since HugeTLB pages are
always fully mapped and only the head page is set PG_dcache_clean, meaning
only the head page's flag may need to be cleared (see commit cf5a501d985b).
So hugetlb_optimize_vmemmap_enabled() is easy to remove.

Link: https://lore.kernel.org/all/c77c61c8-8a5a-87e8-db89-d04d8aaab4cc@oracle.com/ [1]
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador
Reviewed-by: Mike Kravetz
Reviewed-by: Catalin Marinas
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Anshuman Khandual
---
 arch/arm64/mm/flush.c      | 13 +++----------
 include/linux/page-flags.h | 14 ++------------
 2 files changed, 5 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index fc4f710e9820..5f9379b3c8c8 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -76,17 +76,10 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 void flush_dcache_page(struct page *page)
 {
 	/*
-	 * Only the head page's flags of HugeTLB can be cleared since the tail
-	 * vmemmap pages associated with each HugeTLB page are mapped with
-	 * read-only when CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is enabled (more
-	 * details can refer to vmemmap_remap_pte()).  Although
-	 * __sync_icache_dcache() only set PG_dcache_clean flag on the head
-	 * page struct, there is more than one page struct with PG_dcache_clean
-	 * associated with the HugeTLB page since the head vmemmap page frame
-	 * is reused (more details can refer to the comments above
-	 * page_fixed_fake_head()).
+	 * HugeTLB pages are always fully mapped and only head page will be
+	 * set PG_dcache_clean (see comments in __sync_icache_dcache()).
	 */
-	if (hugetlb_optimize_vmemmap_enabled() && PageHuge(page))
+	if (PageHuge(page))
 		page = compound_head(page);
 
 	if (test_bit(PG_dcache_clean, &page->flags))
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index ea19528564d1..2455405ab82b 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -208,12 +208,6 @@ enum pageflags {
 DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			 hugetlb_optimize_vmemmap_key);
 
-static __always_inline bool hugetlb_optimize_vmemmap_enabled(void)
-{
-	return static_branch_maybe(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
-				   &hugetlb_optimize_vmemmap_key);
-}
-
 /*
  * If the feature of optimizing vmemmap pages associated with each HugeTLB
  * page is enabled, the head vmemmap page frame is reused and all of the tail
@@ -232,7 +226,8 @@ static __always_inline bool hugetlb_optimize_vmemmap_enabled(void)
  */
 static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
 {
-	if (!hugetlb_optimize_vmemmap_enabled())
+	if (!static_branch_maybe(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
+				 &hugetlb_optimize_vmemmap_key))
 		return page;
 
 	/*
@@ -260,11 +255,6 @@ static inline const struct page *page_fixed_fake_head(const struct page *page)
 {
 	return page;
 }
-
-static inline bool hugetlb_optimize_vmemmap_enabled(void)
-{
-	return false;
-}
 #endif
 
 static __always_inline int page_is_fake_head(struct page *page)
-- 
2.11.0

From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Oscar Salvador
Subject: [PATCH v2 2/8] mm: hugetlb_vmemmap: optimize vmemmap_optimize_mode handling
Date: Tue, 28 Jun 2022 17:22:29 +0800
Message-Id: <20220628092235.91270-3-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>

We hold an extra reference to hugetlb_optimize_vmemmap_key when turning
vmemmap_optimize_mode on, because we used the static key to tell
memory_hotplug that memory_hotplug.memmap_on_memory should be overridden.
However, that rule went away when PageVmemmapSelfHosted was introduced.
Therefore, we can simplify vmemmap_optimize_mode handling by not holding
the extra reference to hugetlb_optimize_vmemmap_key. This also means we do
not incur the extra page_fixed_fake_head() checks if there are no
vmemmap-optimized hugetlb pages after this change.
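The simplification can be seen in isolation with a small sketch (illustrative
"demo" names only, not part of the patch): previously the boot/sysctl switch
itself held a reference on the static key, so the key could be true even with
zero optimized pages; afterwards the key only counts optimized pages, and the
switch lives in a plain boolean.

	#include <linux/jump_label.h>

	DEFINE_STATIC_KEY_FALSE(demo_key);	/* stand-in for hugetlb_optimize_vmemmap_key */
	static bool demo_enabled;		/* stand-in for vmemmap_optimize_enabled */

	/* Old scheme: switching the mode on/off inc/dec'ed the key itself. */
	static void demo_mode_switch(bool on)
	{
		if (on)
			static_branch_inc(&demo_key);
		else
			static_branch_dec(&demo_key);
	}

	/*
	 * New scheme: only pages that actually get optimized take references,
	 * so the fast-path test below is patched out when none exist.
	 */
	static void demo_page_optimized(void)
	{
		if (READ_ONCE(demo_enabled))
			static_branch_inc(&demo_key);
	}

	static bool demo_fast_path(void)
	{
		return static_branch_unlikely(&demo_key);
	}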
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador
Reviewed-by: Mike Kravetz
---
 include/linux/page-flags.h |  6 ++---
 mm/hugetlb_vmemmap.c       | 65 +++++------------------------------------
 2 files changed, 9 insertions(+), 62 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 2455405ab82b..b44cc24d7496 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -205,8 +205,7 @@ enum pageflags {
 #ifndef __GENERATING_BOUNDS_H
 
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
-			 hugetlb_optimize_vmemmap_key);
+DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 
 /*
  * If the feature of optimizing vmemmap pages associated with each HugeTLB
@@ -226,8 +225,7 @@ DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
  */
 static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
 {
-	if (!static_branch_maybe(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
-				 &hugetlb_optimize_vmemmap_key))
+	if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
 		return page;
 
 	/*
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6d9801bb3fec..0c2f15a35d62 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -23,42 +23,15 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-enum vmemmap_optimize_mode {
-	VMEMMAP_OPTIMIZE_OFF,
-	VMEMMAP_OPTIMIZE_ON,
-};
-
-DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
-			hugetlb_optimize_vmemmap_key);
+DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
-static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+static bool vmemmap_optimize_enabled =
 	IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 
-static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
-{
-	if (vmemmap_optimize_mode == to)
-		return;
-
-	if (to == VMEMMAP_OPTIMIZE_OFF)
-		static_branch_dec(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_inc(&hugetlb_optimize_vmemmap_key);
-	WRITE_ONCE(vmemmap_optimize_mode, to);
-}
-
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	bool enable;
-	enum vmemmap_optimize_mode mode;
-
-	if (kstrtobool(buf, &enable))
-		return -EINVAL;
-
-	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
-	vmemmap_optimize_mode_switch(mode);
-
-	return 0;
+	return kstrtobool(buf, &vmemmap_optimize_enabled);
 }
 early_param("hugetlb_free_vmemmap", hugetlb_vmemmap_early_param);
 
@@ -100,7 +73,7 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 static unsigned int vmemmap_optimizable_pages(struct hstate *h,
 					      struct page *head)
 {
-	if (READ_ONCE(vmemmap_optimize_mode) == VMEMMAP_OPTIMIZE_OFF)
+	if (!READ_ONCE(vmemmap_optimize_enabled))
 		return 0;
 
 	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
@@ -191,7 +164,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 
 	if (!is_power_of_2(sizeof(struct page))) {
 		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 		return;
 	}
 
@@ -212,36 +184,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 }
 
 #ifdef CONFIG_PROC_SYSCTL
-static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
-					    void *buffer, size_t *length,
-					    loff_t *ppos)
-{
-	int ret;
-	enum vmemmap_optimize_mode mode;
-	static DEFINE_MUTEX(sysctl_mutex);
-
-	if (write && !capable(CAP_SYS_ADMIN))
-		return -EPERM;
-
-	mutex_lock(&sysctl_mutex);
-	mode = vmemmap_optimize_mode;
-	table->data = &mode;
-	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
-	if (write && !ret)
-		vmemmap_optimize_mode_switch(mode);
-	mutex_unlock(&sysctl_mutex);
-
-	return ret;
-}
-
 static struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{
 		.procname	= "hugetlb_optimize_vmemmap",
-		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.data		= &vmemmap_optimize_enabled,
+		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.proc_handler	= hugetlb_optimize_vmemmap_handler,
-		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.proc_handler	= proc_dobool,
 	},
 	{ }
 };
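A note on the sysctl side of this change: with the mode reduced to a boolean,
the custom handler and its mutex can be dropped in favor of the stock
proc_dobool() handler. A minimal sketch of that pattern, with a hypothetical
knob name:

	#include <linux/sysctl.h>

	static bool demo_enabled;

	/*
	 * Sketch: a read-write boolean sysctl backed directly by a bool,
	 * mirroring the hugetlb_optimize_vmemmap entry above.
	 */
	static struct ctl_table demo_sysctls[] = {
		{
			.procname	= "demo_enabled",
			.data		= &demo_enabled,
			.maxlen		= sizeof(int),	/* as in the patch above */
			.mode		= 0644,
			.proc_handler	= proc_dobool,
		},
		{ }
	};

Registration would go through register_sysctl() as usual; reads and writes
then behave like a 0/1 integer knob, with no handler-local state to serialize.

-- 
2.11.0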
From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Oscar Salvador
Subject: [PATCH v2 3/8] mm: hugetlb_vmemmap: introduce the name HVO
Date: Tue, 28 Jun 2022 17:22:30 +0800
Message-Id: <20220628092235.91270-4-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>

It is inconvenient to mention the feature of optimizing vmemmap pages
associated with HugeTLB pages when communicating with others, since no
specific or abbreviated name was given to it when it was first introduced.
Let us give it the name HVO (HugeTLB Vmemmap Optimization) from now on.

This commit also updates the documentation of "hugetlb_free_vmemmap" along
the way, as discussed in thread [1].

Link: https://lore.kernel.org/all/21aae898-d54d-cc4b-a11f-1bb7fddcfffa@redhat.com/ [1]
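As background for the "7 * PAGE_SIZE for each 2MB hugetlb page" figure quoted
in the documentation below, the arithmetic (assuming a 4KB base page and a
64-byte struct page) works out as follows:

	2MB huge page: 2MB / 4KB = 512 struct pages x 64B = 32KB
	               = 8 vmemmap pages; the head vmemmap page is kept,
	               so 7 pages are freed.
	1GB huge page: 1GB / 4KB = 262144 struct pages x 64B = 16MB
	               = 4096 vmemmap pages; one is kept, so 4095 are freed.

These are the same 7-per-2MB and 4095-per-1GB numbers stated in the
sysctl/vm.rst hunk of this patch.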
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador
Reviewed-by: Mike Kravetz
---
 Documentation/admin-guide/kernel-parameters.txt |  7 ++++---
 Documentation/admin-guide/mm/hugetlbpage.rst    |  4 ++--
 Documentation/admin-guide/mm/memory-hotplug.rst |  4 ++--
 Documentation/admin-guide/sysctl/vm.rst         |  3 +--
 Documentation/vm/vmemmap_dedup.rst              |  2 ++
 fs/Kconfig                                      | 12 +++++-------
 include/linux/page-flags.h                      |  3 +--
 mm/hugetlb_vmemmap.c                            |  8 ++++----
 mm/hugetlb_vmemmap.h                            |  4 ++--
 9 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 578eb9ef1089..1e30e826041d 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1725,12 +1725,13 @@
 	hugetlb_free_vmemmap=
 			[KNL] Requires CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 			enabled.
+			Control if HugeTLB Vmemmap Optimization (HVO) is enabled.
 			Allows heavy hugetlb users to free up some more
 			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
-			Format: { [oO][Nn]/Y/y/1 | [oO][Ff]/N/n/0 (default) }
+			Format: { on | off (default) }
 
-			[oO][Nn]/Y/y/1: enable the feature
-			[oO][Ff]/N/n/0: disable the feature
+			on: enable HVO
+			off: disable HVO
 
 			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
 			the default is on.
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index a90330d0a837..8e2727dc18d4 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -164,8 +164,8 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
 hugetlb_free_vmemmap
-	When CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is set, this enables optimizing
-	unused vmemmap pages associated with each HugeTLB page.
+	When CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is set, this enables HugeTLB
+	Vmemmap Optimization (HVO).
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 0f56ecd8ac05..a3c9e8ad8fa0 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -653,8 +653,8 @@ block might fail:
 - Concurrent activity that operates on the same physical memory area, such as
   allocating gigantic pages, can result in temporary offlining failures.
 
-- Out of memory when dissolving huge pages, especially when freeing unused
-  vmemmap pages associated with each hugetlb page is enabled.
+- Out of memory when dissolving huge pages, especially when HugeTLB Vmemmap
+  Optimization (HVO) is enabled.
 
   Offlining code may be able to migrate huge page contents, but may not be able
   to dissolve the source huge page because it fails allocating (unmovable) pages
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index e3a952d1fd35..f15099eaaf36 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -569,8 +569,7 @@ This knob is not available when the size of 'struct page' (a structure defined
 in include/linux/mm_types.h) is not power of two (an unusual system config could
 result in this).
 
-Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
-associated with each HugeTLB page.
+Enable (set to 1) or disable (set to 0) HugeTLB Vmemmap Optimization (HVO).
 
 Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
 buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst
index c9c495f62d12..7d7a161aa364 100644
--- a/Documentation/vm/vmemmap_dedup.rst
+++ b/Documentation/vm/vmemmap_dedup.rst
@@ -7,6 +7,8 @@ A vmemmap diet for HugeTLB and Device DAX
 HugeTLB
 =======
 
+This section is to explain how HugeTLB Vmemmap Optimization (HVO) works.
+
 The struct page structures (page structs) are used to describe a physical
 page frame. By default, there is a one-to-one mapping from a page frame to
 it's corresponding page struct.
diff --git a/fs/Kconfig b/fs/Kconfig
index 5976eb33535f..a547307c1ae8 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -247,8 +247,7 @@ config HUGETLB_PAGE
 
 #
 # Select this config option from the architecture Kconfig, if it is preferred
-# to enable the feature of minimizing overhead of struct page associated with
-# each HugeTLB page.
+# to enable the feature of HugeTLB Vmemmap Optimization (HVO).
 #
 config ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	bool
@@ -259,14 +258,13 @@ config HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	depends on SPARSEMEM_VMEMMAP
 
 config HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON
-	bool "Default optimizing vmemmap pages of HugeTLB to on"
+	bool "HugeTLB Vmemmap Optimization (HVO) defaults to on"
 	default n
 	depends on HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	help
-	  When using HUGETLB_PAGE_OPTIMIZE_VMEMMAP, the optimizing unused vmemmap
-	  pages associated with each HugeTLB page is default off. Say Y here
-	  to enable optimizing vmemmap pages of HugeTLB by default. It can then
-	  be disabled on the command line via hugetlb_free_vmemmap=off.
+	  The HugeTLB Vmemmap Optimization (HVO) defaults to off. Say Y here to
+	  enable HVO by default. It can be disabled via hugetlb_free_vmemmap=off
+	  (boot command line) or hugetlb_optimize_vmemmap (sysctl).
 
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b44cc24d7496..78ed46ae6ee5 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -208,8 +208,7 @@ enum pageflags {
 DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 
 /*
- * If the feature of optimizing vmemmap pages associated with each HugeTLB
- * page is enabled, the head vmemmap page frame is reused and all of the tail
+ * If HVO is enabled, the head vmemmap page frame is reused and all of the tail
  * vmemmap addresses map to the head vmemmap page frame (further details can
  * refer to the figure at the head of the mm/hugetlb_vmemmap.c). In other
  * words, there are more than one page struct with PG_head associated with each
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 0c2f15a35d62..7161f86a43a6 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Optimize vmemmap pages associated with HugeTLB
+ * HugeTLB Vmemmap Optimization (HVO)
  *
- * Copyright (c) 2020, Bytedance. All rights reserved.
+ * Copyright (c) 2020, ByteDance. All rights reserved.
  *
  * Author: Muchun Song <songmuchun@bytedance.com>
  *
@@ -156,8 +156,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 
 	/*
 	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
-	 * page structs that can be used when CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP,
-	 * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 * page structs that can be used when HVO is enabled, add a BUILD_BUG_ON
+	 * to catch invalid usage of the tail page structs.
 	 */
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 109b0a53b6fe..ba66fadad9fc 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Optimize vmemmap pages associated with HugeTLB
+ * HugeTLB Vmemmap Optimization (HVO)
  *
- * Copyright (c) 2020, Bytedance. All rights reserved.
+ * Copyright (c) 2020, ByteDance. All rights reserved.
  *
  * Author: Muchun Song <songmuchun@bytedance.com>
  */
-- 
2.11.0

From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Oscar Salvador
Subject: [PATCH v2 4/8] mm: hugetlb_vmemmap: move vmemmap code related to HugeTLB to hugetlb_vmemmap.c
Date: Tue, 28 Jun 2022 17:22:31 +0800
Message-Id: <20220628092235.91270-5-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>
When I first introduced the vmemmap manipulation functions related to
HugeTLB, I thought they might be reused by other modules (e.g. by anyone
using a similar approach to optimize vmemmap pages; as it turned out, DAX
uses the same approach but does not use those functions). After two years,
we have not seen any other users, so move those functions to
hugetlb_vmemmap.c. Code movement without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador
Reviewed-by: Mike Kravetz
---
 include/linux/mm.h   |   7 -
 mm/hugetlb_vmemmap.c | 399 ++++++++++++++++++++++++++++++++++++++++++++-
 mm/sparse-vmemmap.c  | 399 ---------------------------------------------
 3 files changed, 398 insertions(+), 407 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6d4e9ce1a3c5..add9228f53b3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3227,13 +3227,6 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-int vmemmap_remap_free(unsigned long start, unsigned long end,
-		       unsigned long reuse);
-int vmemmap_remap_alloc(unsigned long start, unsigned long end,
-			unsigned long reuse, gfp_t gfp_mask);
-#endif
-
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7161f86a43a6..4d404d10c682 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -10,9 +10,31 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
 
-#include <linux/memory_hotplug.h>
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
 #include "hugetlb_vmemmap.h"
 
+/**
+ * struct vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:		called for each lowest-level entry (PTE).
+ * @nr_walked:		the number of walked pte.
+ * @reuse_page:		the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:		the virtual address of the @reuse_page page.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			or is mapped from.
+ */
+struct vmemmap_remap_walk {
+	void (*remap_pte)(pte_t *pte, unsigned long addr,
+			  struct vmemmap_remap_walk *walk);
+	unsigned long nr_walked;
+	struct page *reuse_page;
+	unsigned long reuse_addr;
+	struct list_head *vmemmap_pages;
+};
+
 /*
  * There are a lot of struct page structures associated with each HugeTLB page.
  * For tail pages, the value of compound_head is the same. So we can reuse first
@@ -23,6 +45,381 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
+{
+	pmd_t __pmd;
+	int i;
+	unsigned long addr = start;
+	struct page *page = pmd_page(*pmd);
+	pte_t *pgtable = pte_alloc_one_kernel(&init_mm);
+
+	if (!pgtable)
+		return -ENOMEM;
+
+	pmd_populate_kernel(&init_mm, &__pmd, pgtable);
+
+	for (i = 0; i < PMD_SIZE / PAGE_SIZE; i++, addr += PAGE_SIZE) {
+		pte_t entry, *pte;
+		pgprot_t pgprot = PAGE_KERNEL;
+
+		entry = mk_pte(page + i, pgprot);
+		pte = pte_offset_kernel(&__pmd, addr);
+		set_pte_at(&init_mm, addr, pte, entry);
+	}
+
+	spin_lock(&init_mm.page_table_lock);
+	if (likely(pmd_leaf(*pmd))) {
+		/*
+		 * Higher order allocations from buddy allocator must be able to
+		 * be treated as independent small pages (as they can be freed
+		 * individually).
+		 */
+		if (!PageReserved(page))
+			split_page(page, get_order(PMD_SIZE));
+
+		/* Make pte visible before pmd. See comment in pmd_install(). */
+		smp_wmb();
+		pmd_populate_kernel(&init_mm, pmd, pgtable);
+		flush_tlb_kernel_range(start, start + PMD_SIZE);
+	} else {
+		pte_free_kernel(&init_mm, pgtable);
+	}
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
+{
+	int leaf;
+
+	spin_lock(&init_mm.page_table_lock);
+	leaf = pmd_leaf(*pmd);
+	spin_unlock(&init_mm.page_table_lock);
+
+	if (!leaf)
+		return 0;
+
+	return __split_vmemmap_huge_pmd(pmd, start);
+}
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	/*
+	 * The reuse_page is found 'first' in table walk before we start
+	 * remapping (which is calling @walk->remap_pte).
+	 */
+	if (!walk->reuse_page) {
+		walk->reuse_page = pte_page(*pte);
+		/*
+		 * Because the reuse address is part of the range that we are
+		 * walking, skip the reuse address range.
+		 */
+		addr += PAGE_SIZE;
+		pte++;
+		walk->nr_walked++;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++) {
+		walk->remap_pte(pte, addr, walk);
+		walk->nr_walked++;
+	}
+}
+
+static int vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			     unsigned long end,
+			     struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		int ret;
+
+		ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK);
+		if (ret)
+			return ret;
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+
+	return 0;
+}
+
+static int vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			     unsigned long end,
+			     struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		int ret;
+
+		next = pud_addr_end(addr, end);
+		ret = vmemmap_pmd_range(pud, addr, next, walk);
+		if (ret)
+			return ret;
+	} while (pud++, addr = next, addr != end);
+
+	return 0;
+}
+
+static int vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			     unsigned long end,
+			     struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		int ret;
+
+		next = p4d_addr_end(addr, end);
+		ret = vmemmap_pud_range(p4d, addr, next, walk);
+		if (ret)
+			return ret;
+	} while (p4d++, addr = next, addr != end);
+
+	return 0;
+}
+
+static int vmemmap_remap_range(unsigned long start, unsigned long end,
+			       struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
+
+	pgd = pgd_offset_k(addr);
+	do {
+		int ret;
+
+		next = pgd_addr_end(addr, end);
+		ret = vmemmap_p4d_range(pgd, addr, next, walk);
+		if (ret)
+			return ret;
+	} while (pgd++, addr = next, addr != end);
+
+	/*
+	 * We only change the mapping of the vmemmap virtual address range
+	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
+	 * belongs to the range.
+	 */
+	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+
+	return 0;
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or buddy allocator. If the PG_reserved flag is set, it means
+ * that it was allocated from the memblock allocator; just free it via
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Remap the tail pages as read-only to catch illegal write operation
+	 * to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page = pte_page(*pte);
+
+	list_add_tail(&page->lru, walk->vmemmap_pages);
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/*
+ * How many struct page structs need to be reset. When we reuse the head
+ * struct page, the special metadata (e.g. page->flags or page->mapping)
+ * cannot be copied to the tail struct page structs. The invalid value
+ * will be checked in free_tail_pages_check(). In order to avoid the message
+ * of "corrupted mapping in tail page", we need to reset at least 3 (one
+ * head struct page struct and two tail struct page structs) struct page
+ * structs.
+ */
+#define NR_RESET_STRUCT_PAGE	3
+
+static inline void reset_struct_pages(struct page *start)
+{
+	int i;
+	struct page *from = start + NR_RESET_STRUCT_PAGE;
+
+	for (i = 0; i < NR_RESET_STRUCT_PAGE; i++)
+		memcpy(start + i, from, sizeof(*from));
+}
+
+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+				struct vmemmap_remap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, (void *)walk->reuse_addr);
+	reset_struct_pages(to);
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
+ *			to the page which @reuse is mapped to, then free the
+ *			vmemmap pages which the range is mapped to.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ *
+ * Return: %0 on success, negative error code otherwise.
+ */
+static int vmemmap_remap_free(unsigned long start, unsigned long end,
+			      unsigned long reuse)
+{
+	int ret;
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/*
+	 * In order to make remapping routine most efficient for the huge pages,
+	 * the routine of vmemmap page table walking has the following rules
+	 * (see more details from the vmemmap_pte_range()):
+	 *
+	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
+	 *   should be continuous.
+	 * - The @reuse address is part of the range [@reuse, @end) that we are
+	 *   walking which is passed to vmemmap_remap_range().
+	 * - The @reuse address is the first in the complete range.
+	 *
+	 * So we need to make sure that @start and @reuse meet the above rules.
+	 */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	mmap_read_lock(&init_mm);
+	ret = vmemmap_remap_range(reuse, end, &walk);
+	if (ret && walk.nr_walked) {
+		end = reuse + walk.nr_walked * PAGE_SIZE;
+		/*
+		 * vmemmap_pages contains pages from the previous
+		 * vmemmap_remap_range call which failed.  These
+		 * are pages which were removed from the vmemmap.
+		 * They will be restored in the following call.
+		 */
+		walk = (struct vmemmap_remap_walk) {
+			.remap_pte	= vmemmap_restore_pte,
+			.reuse_addr	= reuse,
+			.vmemmap_pages	= &vmemmap_pages,
+		};
+
+		vmemmap_remap_range(reuse, end, &walk);
+	}
+	mmap_read_unlock(&init_mm);
+
+	free_vmemmap_page_list(&vmemmap_pages);
+
+	return ret;
+}
+
+static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
+				   gfp_t gfp_mask, struct list_head *list)
+{
+	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
+	int nid = page_to_nid((struct page *)start);
+	struct page *page, *next;
+
+	while (nr_pages--) {
+		page = alloc_pages_node(nid, gfp_mask, 0);
+		if (!page)
+			goto out;
+		list_add_tail(&page->lru, list);
+	}
+
+	return 0;
+out:
+	list_for_each_entry_safe(page, next, list, lru)
+		__free_pages(page, 0);
+	return -ENOMEM;
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, @end)
+ *			 to the page which is from the @vmemmap_pages
+ *			 respectively.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ * @gfp_mask:	GFP flag for allocating vmemmap pages.
+ *
+ * Return: %0 on success, negative error code otherwise.
+ */
+static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			       unsigned long reuse, gfp_t gfp_mask)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_restore_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/* See the comment in the vmemmap_remap_free(). */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
+		return -ENOMEM;
+
+	mmap_read_lock(&init_mm);
+	vmemmap_remap_range(reuse, end, &walk);
+	mmap_read_unlock(&init_mm);
+
+	return 0;
+}
+
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 3fdc34191dce..0d91374f1afb 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,408 +27,9 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
-#include <linux/pgtable.h>
-#include <linux/bootmem_info.h>
 
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
-#include <asm/tlbflush.h>
-
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-/**
- * struct vmemmap_remap_walk - walk vmemmap page table
- *
- * @remap_pte:		called for each lowest-level entry (PTE).
- * @nr_walked:		the number of walked pte.
- * @reuse_page:		the page which is reused for the tail vmemmap pages.
- * @reuse_addr:		the virtual address of the @reuse_page page.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
- *			or is mapped from.
- */
-struct vmemmap_remap_walk {
-	void (*remap_pte)(pte_t *pte, unsigned long addr,
-			  struct vmemmap_remap_walk *walk);
-	unsigned long nr_walked;
-	struct page *reuse_page;
-	unsigned long reuse_addr;
-	struct list_head *vmemmap_pages;
-};
-
-static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
-{
-	pmd_t __pmd;
-	int i;
-	unsigned long addr = start;
-	struct page *page = pmd_page(*pmd);
-	pte_t *pgtable = pte_alloc_one_kernel(&init_mm);
-
-	if (!pgtable)
-		return -ENOMEM;
-
-	pmd_populate_kernel(&init_mm, &__pmd, pgtable);
-
-	for (i = 0; i < PMD_SIZE / PAGE_SIZE; i++, addr += PAGE_SIZE) {
-		pte_t entry, *pte;
-		pgprot_t pgprot = PAGE_KERNEL;
-
-		entry = mk_pte(page + i, pgprot);
-		pte = pte_offset_kernel(&__pmd, addr);
-		set_pte_at(&init_mm, addr, pte, entry);
-	}
-
-	spin_lock(&init_mm.page_table_lock);
-	if (likely(pmd_leaf(*pmd))) {
-		/*
-		 * Higher order allocations from buddy allocator must be able to
-		 * be treated as indepdenent small pages (as they can be freed
-		 * individually).
-		 */
-		if (!PageReserved(page))
-			split_page(page, get_order(PMD_SIZE));
-
-		/* Make pte visible before pmd. See comment in pmd_install(). */
-		smp_wmb();
-		pmd_populate_kernel(&init_mm, pmd, pgtable);
-		flush_tlb_kernel_range(start, start + PMD_SIZE);
-	} else {
-		pte_free_kernel(&init_mm, pgtable);
-	}
-	spin_unlock(&init_mm.page_table_lock);
-
-	return 0;
-}
-
-static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
-{
-	int leaf;
-
-	spin_lock(&init_mm.page_table_lock);
-	leaf = pmd_leaf(*pmd);
-	spin_unlock(&init_mm.page_table_lock);
-
-	if (!leaf)
-		return 0;
-
-	return __split_vmemmap_huge_pmd(pmd, start);
-}
-
-static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
-			      unsigned long end,
-			      struct vmemmap_remap_walk *walk)
-{
-	pte_t *pte = pte_offset_kernel(pmd, addr);
-
-	/*
-	 * The reuse_page is found 'first' in table walk before we start
-	 * remapping (which is calling @walk->remap_pte).
-	 */
-	if (!walk->reuse_page) {
-		walk->reuse_page = pte_page(*pte);
-		/*
-		 * Because the reuse address is part of the range that we are
-		 * walking, skip the reuse address range.
-		 */
-		addr += PAGE_SIZE;
-		pte++;
-		walk->nr_walked++;
-	}
-
-	for (; addr != end; addr += PAGE_SIZE, pte++) {
-		walk->remap_pte(pte, addr, walk);
-		walk->nr_walked++;
-	}
-}
-
-static int vmemmap_pmd_range(pud_t *pud, unsigned long addr,
-			     unsigned long end,
-			     struct vmemmap_remap_walk *walk)
-{
-	pmd_t *pmd;
-	unsigned long next;
-
-	pmd = pmd_offset(pud, addr);
-	do {
-		int ret;
-
-		ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK);
-		if (ret)
-			return ret;
-
-		next = pmd_addr_end(addr, end);
-		vmemmap_pte_range(pmd, addr, next, walk);
-	} while (pmd++, addr = next, addr != end);
-
-	return 0;
-}
-
-static int vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
-			     unsigned long end,
-			     struct vmemmap_remap_walk *walk)
-{
-	pud_t *pud;
-	unsigned long next;
-
-	pud = pud_offset(p4d, addr);
-	do {
-		int ret;
-
-		next = pud_addr_end(addr, end);
-		ret = vmemmap_pmd_range(pud, addr, next, walk);
-		if (ret)
-			return ret;
-	} while (pud++, addr = next, addr != end);
-
-	return 0;
-}
-
-static int vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
-			     unsigned long end,
-			     struct vmemmap_remap_walk *walk)
-{
-	p4d_t *p4d;
-	unsigned long next;
-
-	p4d = p4d_offset(pgd, addr);
-	do {
-		int ret;
-
-		next = p4d_addr_end(addr, end);
-		ret = vmemmap_pud_range(p4d, addr, next, walk);
-		if (ret)
-			return ret;
-	} while (p4d++, addr = next, addr != end);
-
-	return 0;
-}
-
-static int vmemmap_remap_range(unsigned long start, unsigned long end,
-			       struct vmemmap_remap_walk *walk)
-{
-	unsigned long addr = start;
-	unsigned long next;
-	pgd_t *pgd;
-
-	VM_BUG_ON(!PAGE_ALIGNED(start));
-	VM_BUG_ON(!PAGE_ALIGNED(end));
-
-	pgd = pgd_offset_k(addr);
-	do {
-		int ret;
-
-		next = pgd_addr_end(addr, end);
-		ret = vmemmap_p4d_range(pgd, addr, next, walk);
-		if (ret)
-			return ret;
-	} while (pgd++, addr = next, addr != end);
-
-	/*
-	 * We only change the mapping of the vmemmap virtual address range
-	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
-	 * belongs to the range.
-	 */
-	flush_tlb_kernel_range(start + PAGE_SIZE, end);
-
-	return 0;
-}
-
-/*
- * Free a vmemmap page. A vmemmap page can be allocated from the memblock
- * allocator or buddy allocator. If the PG_reserved flag is set, it means
- * that it allocated from the memblock allocator, just free it via the
- * free_bootmem_page(). Otherwise, use __free_page().
- */
-static inline void free_vmemmap_page(struct page *page)
-{
-	if (PageReserved(page))
-		free_bootmem_page(page);
-	else
-		__free_page(page);
-}
-
-/* Free a list of the vmemmap pages */
-static void free_vmemmap_page_list(struct list_head *list)
-{
-	struct page *page, *next;
-
-	list_for_each_entry_safe(page, next, list, lru) {
-		list_del(&page->lru);
-		free_vmemmap_page(page);
-	}
-}
-
-static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
-			      struct vmemmap_remap_walk *walk)
-{
-	/*
-	 * Remap the tail pages as read-only to catch illegal write operation
-	 * to the tail pages.
-	 */
-	pgprot_t pgprot = PAGE_KERNEL_RO;
-	pte_t entry = mk_pte(walk->reuse_page, pgprot);
-	struct page *page = pte_page(*pte);
-
-	list_add_tail(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
-}
-
-/*
- * How many struct page structs need to be reset. When we reuse the head
- * struct page, the special metadata (e.g. page->flags or page->mapping)
- * cannot copy to the tail struct page structs. The invalid value will be
- * checked in the free_tail_pages_check(). In order to avoid the message
- * of "corrupted mapping in tail page". We need to reset at least 3 (one
- * head struct page struct and two tail struct page structs) struct page
- * structs.
- */
-#define NR_RESET_STRUCT_PAGE	3
-
-static inline void reset_struct_pages(struct page *start)
-{
-	int i;
-	struct page *from = start + NR_RESET_STRUCT_PAGE;
-
-	for (i = 0; i < NR_RESET_STRUCT_PAGE; i++)
-		memcpy(start + i, from, sizeof(*from));
-}
-
-static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
-				struct vmemmap_remap_walk *walk)
-{
-	pgprot_t pgprot = PAGE_KERNEL;
-	struct page *page;
-	void *to;
-
-	BUG_ON(pte_page(*pte) != walk->reuse_page);
-
-	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
-	list_del(&page->lru);
-	to = page_to_virt(page);
-	copy_page(to, (void *)walk->reuse_addr);
-	reset_struct_pages(to);
-
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
-}
-
-/**
- * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
- *			to the page which @reuse is mapped to, then free vmemmap
- *			which the range are mapped to.
- * @start:	start address of the vmemmap virtual address range that we want
- *		to remap.
- * @end:	end address of the vmemmap virtual address range that we want to
- *		remap.
- * @reuse:	reuse address.
- *
- * Return: %0 on success, negative error code otherwise.
- */
-int vmemmap_remap_free(unsigned long start, unsigned long end,
-		       unsigned long reuse)
-{
-	int ret;
-	LIST_HEAD(vmemmap_pages);
-	struct vmemmap_remap_walk walk = {
-		.remap_pte	= vmemmap_remap_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= &vmemmap_pages,
-	};
-
-	/*
-	 * In order to make remapping routine most efficient for the huge pages,
-	 * the routine of vmemmap page table walking has the following rules
-	 * (see more details from the vmemmap_pte_range()):
-	 *
-	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
-	 *   should be continuous.
-	 * - The @reuse address is part of the range [@reuse, @end) that we are
-	 *   walking which is passed to vmemmap_remap_range().
-	 * - The @reuse address is the first in the complete range.
-	 *
-	 * So we need to make sure that @start and @reuse meet the above rules.
-	 */
-	BUG_ON(start - reuse != PAGE_SIZE);
-
-	mmap_read_lock(&init_mm);
-	ret = vmemmap_remap_range(reuse, end, &walk);
-	if (ret && walk.nr_walked) {
-		end = reuse + walk.nr_walked * PAGE_SIZE;
-		/*
-		 * vmemmap_pages contains pages from the previous
-		 * vmemmap_remap_range call which failed.  These
-		 * are pages which were removed from the vmemmap.
-		 * They will be restored in the following call.
-		 */
-		walk = (struct vmemmap_remap_walk) {
-			.remap_pte	= vmemmap_restore_pte,
-			.reuse_addr	= reuse,
-			.vmemmap_pages	= &vmemmap_pages,
-		};
-
-		vmemmap_remap_range(reuse, end, &walk);
-	}
-	mmap_read_unlock(&init_mm);
-
-	free_vmemmap_page_list(&vmemmap_pages);
-
-	return ret;
-}
-
-static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
-				   gfp_t gfp_mask, struct list_head *list)
-{
-	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
-	int nid = page_to_nid((struct page *)start);
-	struct page *page, *next;
-
-	while (nr_pages--) {
-		page = alloc_pages_node(nid, gfp_mask, 0);
-		if (!page)
-			goto out;
-		list_add_tail(&page->lru, list);
-	}
-
-	return 0;
-out:
-	list_for_each_entry_safe(page, next, list, lru)
-		__free_pages(page, 0);
-	return -ENOMEM;
-}
-
-/**
- * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, @end)
- *			 to the page which is from the @vmemmap_pages
- *			 respectively.
- * @start:	start address of the vmemmap virtual address range that we want
- *		to remap.
- * @end:	end address of the vmemmap virtual address range that we want to
- *		remap.
- * @reuse:	reuse address.
- * @gfp_mask:	GFP flag for allocating vmemmap pages.
- *
- * Return: %0 on success, negative error code otherwise.
- */
-int vmemmap_remap_alloc(unsigned long start, unsigned long end,
-			unsigned long reuse, gfp_t gfp_mask)
-{
-	LIST_HEAD(vmemmap_pages);
-	struct vmemmap_remap_walk walk = {
-		.remap_pte	= vmemmap_restore_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= &vmemmap_pages,
-	};
-
-	/* See the comment in the vmemmap_remap_free(). */
-	BUG_ON(start - reuse != PAGE_SIZE);
-
-	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
-		return -ENOMEM;
-
-	mmap_read_lock(&init_mm);
-	vmemmap_remap_range(reuse, end, &walk);
-	mmap_read_unlock(&init_mm);
-
-	return 0;
-}
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
-- 
2.11.0

From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com
Subject: [PATCH v2 5/8] mm: hugetlb_vmemmap: replace early_param() with core_param()
Date: Tue, 28 Jun 2022 17:22:32 +0800
Message-Id: <20220628092235.91270-6-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>

After the following commit:

  78f39084b41d ("mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl")

there is no ordering requirement between the "hugetlb_free_vmemmap" and
"hugepages" parameters, since the check of whether HVO is enabled was
removed from hugetlb_vmemmap_init(). Therefore, we can safely replace
early_param() with core_param() to simplify the code.
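For readers unfamiliar with core_param(): it registers a built-in kernel
command-line parameter that is parsed by the generic module-parameter code,
so no handler function needs to be written. A minimal sketch of the pattern
with a hypothetical parameter name:

	#include <linux/moduleparam.h>

	/*
	 * Sketch only: registers a boot parameter "demo_flag" that can be
	 * set with demo_flag=on/off on the kernel command line. The last
	 * argument is the sysfs permission mask; 0 keeps it out of
	 * /sys/module.
	 */
	static bool demo_flag;
	core_param(demo_flag, demo_flag, bool, 0);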
Signed-off-by: Muchun Song
Reviewed-by: Mike Kravetz
---
 mm/hugetlb_vmemmap.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4d404d10c682..b55be6d93f92 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -423,14 +423,8 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
-static bool vmemmap_optimize_enabled =
-	IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
-
-static int __init hugetlb_vmemmap_early_param(char *buf)
-{
-	return kstrtobool(buf, &vmemmap_optimize_enabled);
-}
-early_param("hugetlb_free_vmemmap", hugetlb_vmemmap_early_param);
+static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
 /*
  * Previously discarded vmemmap pages will be allocated and remapping
-- 
2.11.0
From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Muchun Song
Subject: [PATCH v2 6/8] mm: hugetlb_vmemmap: improve hugetlb_vmemmap code readability
Date: Tue, 28 Jun 2022 17:22:33 +0800
Message-Id: <20220628092235.91270-7-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>

There is a discussion about the names of hugetlb_vmemmap_alloc/free in
thread [1]. David suggested renaming "alloc/free" to "optimize/restore" to
make the functionality clearer to users: "optimize" means the function will
optimize a HugeTLB page's vmemmap pages, while "restore" means restoring the
vmemmap pages discarded earlier. This commit does so.

Another source of confusion is that RESERVE_VMEMMAP_NR is not used explicitly
for vmemmap_addr but implicitly for vmemmap_end in hugetlb_vmemmap_alloc/free.
David suggested computing at runtime what hugetlb_vmemmap_init() precomputes
today. We do not need to worry about the overhead of computing at runtime,
since the calculation is simple enough and those functions are not in a hot
path.

This commit brings the following improvements:

1) The function names (suffixed with "optimize/restore") are more expressive.
2) The logic in hugetlb_vmemmap_optimize/restore() becomes less convoluted.
3) hugetlb_vmemmap_init() no longer needs to be exported.
4) The ->optimize_vmemmap_pages field in struct hstate is removed.
5) is_power_of_2(sizeof(struct page)) is checked in one place instead of two.
6) More comments are added for hugetlb_vmemmap_optimize/restore().
7) For external users, hugetlb_optimize_vmemmap_pages() was originally used to
   detect whether a HugeTLB page's vmemmap pages are optimizable. It is
   removed in favor of a new, more expressive helper,
   hugetlb_vmemmap_optimizable(). A sketch of the resulting runtime size
   computation follows below.
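Here is a small userspace model of the hugetlb_vmemmap_optimizable_size() computation this patch introduces. The constants assume x86-64 (4 KiB base pages, 64-byte struct page); the kernel helpers are modeled, not reused.

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define STRUCT_PAGE_SIZE	64UL		/* assumed sizeof(struct page) */
#define RESERVE_SIZE		PAGE_SIZE	/* models HUGETLB_VMEMMAP_RESERVE_SIZE */

static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Model: bytes of vmemmap freeable per HugeTLB page of the given size. */
static long optimizable_size(unsigned long huge_page_size)
{
	long vmemmap_size = huge_page_size / PAGE_SIZE * STRUCT_PAGE_SIZE;
	long size = vmemmap_size - RESERVE_SIZE;

	if (!is_power_of_2(STRUCT_PAGE_SIZE))
		return 0;
	return size > 0 ? size : 0;
}

int main(void)
{
	/* 2 MiB HugeTLB: 512 * 64 B = 32 KiB of vmemmap, 28 KiB freeable. */
	printf("2M: %ld KiB freeable\n", optimizable_size(2UL << 20) / 1024);
	/* 1 GiB HugeTLB: 262144 * 64 B = 16 MiB of vmemmap, nearly all freeable. */
	printf("1G: %ld KiB freeable\n", optimizable_size(1UL << 30) / 1024);
	return 0;
}
```

Note that a negative intermediate result (possible on configurations where the whole vmemmap fits in the reserved page, e.g. 64 KiB base pages with small HugeTLB sizes) clamps to 0, i.e. "not optimizable", which is exactly what hugetlb_vmemmap_optimizable() keys off.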
Link: https://lore.kernel.org/all/20220404074652.68024-2-songmuchun@bytedance.com/ [1]
Signed-off-by: Muchun Song
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h |   7 +--
 include/linux/sysctl.h  |   4 ++
 mm/hugetlb.c            |  15 ++---
 mm/hugetlb_vmemmap.c    | 143 ++++++++++++++++++++-----------------------
 mm/hugetlb_vmemmap.h    |  41 +++++++++-----
 5 files changed, 102 insertions(+), 108 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3bb98434550a..0d790fa3f297 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -641,9 +641,6 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-	unsigned int optimize_vmemmap_pages;
-#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[8];
@@ -719,7 +716,7 @@ static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
 	return hstate_file(vma->vm_file);
 }
 
-static inline unsigned long huge_page_size(struct hstate *h)
+static inline unsigned long huge_page_size(const struct hstate *h)
 {
 	return (unsigned long)PAGE_SIZE << h->order;
 }
@@ -748,7 +745,7 @@ static inline bool hstate_is_gigantic(struct hstate *h)
 	return huge_page_order(h) >= MAX_ORDER;
 }
 
-static inline unsigned int pages_per_huge_page(struct hstate *h)
+static inline unsigned int pages_per_huge_page(const struct hstate *h)
 {
 	return 1 << h->order;
 }
diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
index 80263f7cdb77..5a227b9e3ad5 100644
--- a/include/linux/sysctl.h
+++ b/include/linux/sysctl.h
@@ -266,6 +266,10 @@ static inline struct ctl_table_header *register_sysctl_table(struct ctl_table *
 	return NULL;
 }
 
+static inline void register_sysctl_init(const char *path, struct ctl_table *table)
+{
+}
+
 static inline struct ctl_table_header *register_sysctl_mount_point(const char *path)
 {
 	return NULL;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 559084d96082..bd413466682b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1535,7 +1535,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
-	if (hugetlb_vmemmap_alloc(h, page)) {
+	if (hugetlb_vmemmap_restore(h, page)) {
 		spin_lock_irq(&hugetlb_lock);
 		/*
 		 * If we cannot allocate vmemmap pages, just refuse to free the
@@ -1621,7 +1621,7 @@ static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
 
 static inline void flush_free_hpage_work(struct hstate *h)
 {
-	if (hugetlb_optimize_vmemmap_pages(h))
+	if (hugetlb_vmemmap_optimizable(h))
 		flush_work(&free_hpage_work);
 }
 
@@ -1743,7 +1743,7 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
 
 static void __prep_new_huge_page(struct hstate *h, struct page *page)
 {
-	hugetlb_vmemmap_free(h, page);
+	hugetlb_vmemmap_optimize(h, page);
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	hugetlb_set_page_subpool(page, NULL);
@@ -2116,7 +2116,7 @@ int dissolve_free_huge_page(struct page *page)
 		 * Attempt to allocate vmemmmap here so that we can take
 		 * appropriate action on failure.
		 */
-		rc = hugetlb_vmemmap_alloc(h, head);
+		rc = hugetlb_vmemmap_restore(h, head);
 		if (!rc) {
 			/*
 			 * Move PageHWPoison flag from head page to the raw
@@ -3191,8 +3191,10 @@ static void __init report_hugepages(void)
 		char buf[32];
 
 		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
-		pr_info("HugeTLB registered %s page size, pre-allocated %ld pages\n",
+		pr_info("HugeTLB: registered %s page size, pre-allocated %ld pages\n",
 			buf, h->free_huge_pages);
+		pr_info("HugeTLB: %d KiB vmemmap can be freed for a %s page\n",
+			hugetlb_vmemmap_optimizable_size(h) / SZ_1K, buf);
 	}
 }
 
@@ -3430,7 +3432,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	remove_hugetlb_page_for_demote(h, page, false);
 	spin_unlock_irq(&hugetlb_lock);
 
-	rc = hugetlb_vmemmap_alloc(h, page);
+	rc = hugetlb_vmemmap_restore(h, page);
 	if (rc) {
 		/* Allocation of vmemmmap failed, we can not demote page */
 		spin_lock_irq(&hugetlb_lock);
@@ -4120,7 +4122,6 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
-	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b55be6d93f92..6bbc445b1a66 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -35,16 +35,6 @@ struct vmemmap_remap_walk {
 	struct list_head *vmemmap_pages;
 };
 
-/*
- * There are a lot of struct page structures associated with each HugeTLB page.
- * For tail pages, the value of compound_head is the same. So we can reuse first
- * page of head page structures. We map the virtual addresses of all the pages
- * of tail page structures to the head page struct, and then free these page
- * frames. Therefore, we need to reserve one pages as vmemmap areas.
- */
-#define RESERVE_VMEMMAP_NR		1U
-#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
-
 static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 {
 	pmd_t __pmd;
@@ -426,32 +416,37 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
-/*
- * Previously discarded vmemmap pages will be allocated and remapping
- * after this function returns zero.
+/**
+ * hugetlb_vmemmap_restore - restore previously optimized (by
+ *			     hugetlb_vmemmap_optimize()) vmemmap pages which
+ *			     will be reallocated and remapped.
+ * @h:		struct hstate.
+ * @head:	the head page whose vmemmap pages will be restored.
+ *
+ * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
+ * negative error code otherwise.
 */
-int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
+int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 {
 	int ret;
-	unsigned long vmemmap_addr = (unsigned long)head;
-	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;
+	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_reuse;
 
 	if (!HPageVmemmapOptimized(head))
 		return 0;
 
-	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
-	vmemmap_pages	= hugetlb_optimize_vmemmap_pages(h);
-	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
-	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
+	vmemmap_end	= vmemmap_start + hugetlb_vmemmap_size(h);
+	vmemmap_reuse	= vmemmap_start;
+	vmemmap_start	+= HUGETLB_VMEMMAP_RESERVE_SIZE;
 
 	/*
-	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
+	 * The pages which the vmemmap virtual address range [@vmemmap_start,
 	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
 	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
 	 * When a HugeTLB page is freed to the buddy allocator, previously
 	 * discarded vmemmap pages must be allocated and remapping.
 	 */
-	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
+	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
 	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
@@ -461,11 +456,14 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	return ret;
 }
 
-static unsigned int vmemmap_optimizable_pages(struct hstate *h,
-					      struct page *head)
+/* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
+static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
 {
 	if (!READ_ONCE(vmemmap_optimize_enabled))
-		return 0;
+		return false;
+
+	if (!hugetlb_vmemmap_optimizable(h))
+		return false;
 
 	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
 		pmd_t *pmdp, pmd;
@@ -508,73 +506,47 @@ static unsigned int vmemmap_optimizable_pages(struct hstate *h,
 	 * +-------------------------------------------+
 	 */
 		if (PageVmemmapSelfHosted(vmemmap_page))
-			return 0;
+			return false;
 	}
 
-	return hugetlb_optimize_vmemmap_pages(h);
+	return true;
 }
 
-void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
+/**
+ * hugetlb_vmemmap_optimize - optimize @head page's vmemmap pages.
+ * @h:		struct hstate.
+ * @head:	the head page whose vmemmap pages will be optimized.
+ *
+ * This function only tries to optimize @head's vmemmap pages and does not
+ * guarantee that the optimization will succeed after it returns. The caller
+ * can use HPageVmemmapOptimized(@head) to detect if @head's vmemmap pages
+ * have been optimized.
+ */
+void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
 {
-	unsigned long vmemmap_addr = (unsigned long)head;
-	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;
+	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_reuse;
 
-	vmemmap_pages = vmemmap_optimizable_pages(h, head);
-	if (!vmemmap_pages)
+	if (!vmemmap_should_optimize(h, head))
 		return;
 
 	static_branch_inc(&hugetlb_optimize_vmemmap_key);
 
-	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
-	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
-	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
+	vmemmap_end	= vmemmap_start + hugetlb_vmemmap_size(h);
+	vmemmap_reuse	= vmemmap_start;
+	vmemmap_start	+= HUGETLB_VMEMMAP_RESERVE_SIZE;
 
 	/*
-	 * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end)
+	 * Remap the vmemmap virtual address range [@vmemmap_start, @vmemmap_end)
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
-	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
+	 * which the range [@vmemmap_start, @vmemmap_end] is mapped to.
 	 */
-	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse))
 		static_branch_dec(&hugetlb_optimize_vmemmap_key);
 	else
 		SetHPageVmemmapOptimized(head);
 }
 
-void __init hugetlb_vmemmap_init(struct hstate *h)
-{
-	unsigned int nr_pages = pages_per_huge_page(h);
-	unsigned int vmemmap_pages;
-
-	/*
-	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
-	 * page structs that can be used when HVO is enabled, add a BUILD_BUG_ON
-	 * to catch invalid usage of the tail page structs.
-	 */
-	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
-		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
-
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return;
-	}
-
-	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
-	/*
-	 * The head page is not to be freed to buddy allocator, the other tail
-	 * pages will map to the head page, so they can be freed.
-	 *
-	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
-	 * on some architectures (e.g. aarch64). See Documentation/arm64/
-	 * hugetlbpage.rst for more details.
-	 */
-	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
-		h->optimize_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
-
-	pr_info("can optimize %d vmemmap pages for %s\n",
-		h->optimize_vmemmap_pages, h->name);
-}
-
-#ifdef CONFIG_PROC_SYSCTL
 static struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{
 		.procname	= "hugetlb_optimize_vmemmap",
@@ -586,16 +558,21 @@ static struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{ }
 };
 
-static __init int hugetlb_vmemmap_sysctls_init(void)
+static int __init hugetlb_vmemmap_init(void)
 {
-	/*
-	 * If "struct page" crosses page boundaries, the vmemmap pages cannot
-	 * be optimized.
-	 */
-	if (is_power_of_2(sizeof(struct page)))
-		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
-
+	/* HUGETLB_VMEMMAP_RESERVE_SIZE should cover all used struct pages */
+	BUILD_BUG_ON(__NR_USED_SUBPAGE * sizeof(struct page) > HUGETLB_VMEMMAP_RESERVE_SIZE);
+
+	if (IS_ENABLED(CONFIG_PROC_SYSCTL)) {
+		const struct hstate *h;
+
+		for_each_hstate(h) {
+			if (hugetlb_vmemmap_optimizable(h)) {
+				register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+				break;
+			}
+		}
+	}
 	return 0;
 }
-late_initcall(hugetlb_vmemmap_sysctls_init);
-#endif /* CONFIG_PROC_SYSCTL */
+late_initcall(hugetlb_vmemmap_init);
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index ba66fadad9fc..25bd0e002431 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,35 +11,50 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head);
-void hugetlb_vmemmap_free(struct hstate *h, struct page *head);
-void hugetlb_vmemmap_init(struct hstate *h);
+int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
+void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
 
 /*
- * How many vmemmap pages associated with a HugeTLB page that can be
- * optimized and freed to the buddy allocator.
+ * Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
+ * Documentation/vm/vmemmap_dedup.rst.
  */
-static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
+#define HUGETLB_VMEMMAP_RESERVE_SIZE	PAGE_SIZE
+
+static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
 {
-	return h->optimize_vmemmap_pages;
+	return pages_per_huge_page(h) * sizeof(struct page);
+}
+
+/*
+ * Return how many vmemmap size associated with a HugeTLB page that can be
+ * optimized and can be freed to the buddy allocator.
+ */
+static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
+{
+	int size = hugetlb_vmemmap_size(h) - HUGETLB_VMEMMAP_RESERVE_SIZE;
+
+	if (!is_power_of_2(sizeof(struct page)))
+		return 0;
+	return size > 0 ? size : 0;
+}
 #else
-static inline int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
+static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 {
 	return 0;
 }
 
-static inline void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
+static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
 {
 }
 
-static inline void hugetlb_vmemmap_init(struct hstate *h)
+static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
 {
+	return 0;
 }
+#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
 
-static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
+static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 {
-	return 0;
+	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0
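The new BUILD_BUG_ON in hugetlb_vmemmap_init() encodes the invariant that every struct page the HugeTLB code actually touches fits inside the single reserved vmemmap page. A compile-time userspace model of that check follows; the constants (64-byte struct page, __NR_USED_SUBPAGE of 4 per the "first 4 struct pages" note in vmemmap_dedup.rst) are assumptions for illustration.

```c
#include <assert.h>

#define PAGE_SIZE		4096	/* models HUGETLB_VMEMMAP_RESERVE_SIZE */
#define STRUCT_PAGE_SIZE	64	/* assumed sizeof(struct page) */
#define NR_USED_SUBPAGE		4	/* assumed __NR_USED_SUBPAGE */

/* Model of BUILD_BUG_ON(__NR_USED_SUBPAGE * sizeof(struct page) >
 * HUGETLB_VMEMMAP_RESERVE_SIZE): the used struct pages must fit in the
 * one vmemmap page that HVO keeps mapped read-write. */
static_assert(NR_USED_SUBPAGE * STRUCT_PAGE_SIZE <= PAGE_SIZE,
	      "used struct pages must fit in the reserved vmemmap page");

int main(void) { return 0; }
```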
From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Muchun Song
Subject: [PATCH v2 7/8] mm: hugetlb_vmemmap: move code comments to vmemmap_dedup.rst
Date: Tue, 28 Jun 2022 17:22:34 +0800
Message-Id: <20220628092235.91270-8-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>

All the comments explaining how HVO works were moved to vmemmap_dedup.rst
by commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory savings for
compound devmaps"), except for some comments above page_fixed_fake_head().
This commit moves those comments to vmemmap_dedup.rst as well, and improves
vmemmap_dedup.rst itself.

Signed-off-by: Muchun Song
---
 Documentation/vm/vmemmap_dedup.rst | 70 +++++++++++++++++++++++-----------
 include/linux/page-flags.h         | 15 ++------
 2 files changed, 49 insertions(+), 36 deletions(-)

diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst
index 7d7a161aa364..a4b12ff906c4 100644
--- a/Documentation/vm/vmemmap_dedup.rst
+++ b/Documentation/vm/vmemmap_dedup.rst
@@ -9,23 +9,23 @@ HugeTLB
 
 This section is to explain how HugeTLB Vmemmap Optimization (HVO) works.
 
-The struct page structures (page structs) are used to describe a physical
-page frame. By default, there is a one-to-one mapping from a page frame to
-it's corresponding page struct.
+The ``struct page`` structures are used to describe a physical page frame. By
+default, there is a one-to-one mapping from a page frame to it's corresponding
+``struct page``.
 
 HugeTLB pages consist of multiple base page size pages and is supported by many
 architectures. See Documentation/admin-guide/mm/hugetlbpage.rst for more
 details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are
 currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page
 consists of 512 base pages and a 1GB HugeTLB page consists of 4096 base pages.
-For each base page, there is a corresponding page struct.
+For each base page, there is a corresponding ``struct page``.
 
-Within the HugeTLB subsystem, only the first 4 page structs are used to
-contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides
-this upper limit. The only 'useful' information in the remaining page structs
+Within the HugeTLB subsystem, only the first 4 ``struct page`` are used to
+contain unique information about a HugeTLB page. ``__NR_USED_SUBPAGE`` provides
+this upper limit. The only 'useful' information in the remaining ``struct page``
 is the compound_head field, and this field is the same for all tail pages.
 
-By removing redundant page structs for HugeTLB pages, memory can be returned
+By removing redundant ``struct page`` for HugeTLB pages, memory can be returned
 to the buddy allocator for other uses.
 
 Different architectures support different HugeTLB pages.
For example, the
@@ -46,7 +46,7 @@ page.
 |              |    64KB   |    2MB    |   512MB   |    16GB   |           |
 +--------------+-----------+-----------+-----------+-----------+-----------+
 
-When the system boot up, every HugeTLB page has more than one struct page
+When the system boot up, every HugeTLB page has more than one ``struct page``
 structs which size is (unit: pages)::
 
    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
@@ -76,10 +76,10 @@ Where n is how many pte entries which one page can contains. So the value of n
 is (PAGE_SIZE / sizeof(pte_t)).
 
 This optimization only supports 64-bit system, so the value of sizeof(pte_t)
-is 8. And this optimization also applicable only when the size of struct page
-is a power of two. In most cases, the size of struct page is 64 bytes (e.g.
+is 8. And this optimization also applicable only when the size of ``struct page``
+is a power of two. In most cases, the size of ``struct page`` is 64 bytes (e.g.
 x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the
-size of struct page structs of it is 8 page frames which size depends on the
+size of ``struct page`` structs of it is 8 page frames which size depends on the
 size of the base page.
 
 For the HugeTLB page of the pud level mapping, then::
@@ -88,7 +88,7 @@ For the HugeTLB page of the pud level mapping, then::
               = PAGE_SIZE / 8 * 8 (pages)
               = PAGE_SIZE (pages)
 
-Where the struct_size(pmd) is the size of the struct page structs of a
+Where the struct_size(pmd) is the size of the ``struct page`` structs of a
 HugeTLB page of the pmd level mapping.
 
 E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB
@@ -96,7 +96,7 @@ HugeTLB page consists in 4096.
 
 Next, we take the pmd level mapping of the HugeTLB page as an example to
 show the internal implementation of this optimization. There are 8 pages
-struct page structs associated with a HugeTLB page which is pmd mapped.
+``struct page`` structs associated with a HugeTLB page which is pmd mapped.
 
 Here is how things look before optimization::
 
@@ -124,10 +124,10 @@ Here is how things look before optimization::
     +-----------+
 
 The value of page->compound_head is the same for all tail pages. The first
-page of page structs (page 0) associated with the HugeTLB page contains the 4
-page structs necessary to describe the HugeTLB. The only use of the remaining
-pages of page structs (page 1 to page 7) is to point to page->compound_head.
-Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs
+page of ``struct page`` (page 0) associated with the HugeTLB page contains the 4
+``struct page`` necessary to describe the HugeTLB. The only use of the remaining
+pages of ``struct page`` (page 1 to page 7) is to point to page->compound_head.
+Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of ``struct page``
 will be used for each HugeTLB page. This will allow us to free the remaining
 7 pages to the buddy allocator.
 
@@ -169,13 +169,37 @@ entries that can be cached in a single TLB entry.
 
 The contiguous bit is used to increase the mapping size at the pmd and pte
 (last) level. So this type of HugeTLB page can be optimized only when its
-size of the struct page structs is greater than 1 page.
+size of the ``struct page`` structs is greater than **1** page.
 
 Notice: The head vmemmap page is not freed to the buddy allocator and all
 tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
-more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page)
-associated with each HugeTLB page. The compound_head() can handle this
-correctly (more details refer to the comment above compound_head()).
So we can see -more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB pag= e) -associated with each HugeTLB page. The compound_head() can handle this -correctly (more details refer to the comment above compound_head()). +more than one ``struct page`` struct with ``PG_head`` (e.g. 8 per 2 MB Hug= eTLB +page) associated with each HugeTLB page. The ``compound_head()`` can handle +this correctly. There is only **one** head ``struct page``, the tail +``struct page`` with ``PG_head`` are fake head ``struct page``. We need an +approach to distinguish between those two different types of ``struct page= `` so +that ``compound_head()`` can return the real head ``struct page`` when the +parameter is the tail ``struct page`` but with ``PG_head``. The following = code +snippet describes how to distinguish between real and fake head ``struct p= age``. + +.. code-block:: c + + if (test_bit(PG_head, &page->flags)) { + unsigned long head =3D READ_ONCE(page[1].compound_head); + + if (head & 1) { + if (head =3D=3D (unsigned long)page + 1) + /* head struct page */ + else + /* tail struct page */ + } else { + /* head struct page */ + } + } + +We can safely access the field of the **page[1]** with ``PG_head`` because= the +page is a compound page composed with at least two contiguous pages. +The implementation refers to ``page_fixed_fake_head()``. =20 Device DAX =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D @@ -189,7 +213,7 @@ PMD_SIZE (2M on x86_64) and PUD_SIZE (1G on x86_64). =20 The differences with HugeTLB are relatively minor. =20 -It only use 3 page structs for storing all information as opposed +It only use 3 ``struct page`` for storing all information as opposed to 4 on HugeTLB pages. =20 There's no remapping of vmemmap given that device-dax memory is not part of diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 78ed46ae6ee5..62864cad4a2a 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -208,19 +208,8 @@ enum pageflags { DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key); =20 /* - * If HVO is enabled, the head vmemmap page frame is reused and all of the= tail - * vmemmap addresses map to the head vmemmap page frame (furture details c= an - * refer to the figure at the head of the mm/hugetlb_vmemmap.c). In other - * words, there are more than one page struct with PG_head associated with= each - * HugeTLB page. We __know__ that there is only one head page struct, the= tail - * page structs with PG_head are fake head page structs. We need an appro= ach - * to distinguish between those two different types of page structs so that - * compound_head() can return the real head page struct when the parameter= is - * the tail page struct but with PG_head. - * - * The page_fixed_fake_head() returns the real head page struct if the @pa= ge is - * fake page head, otherwise, returns @page which can either be a true page - * head or tail. + * Return the real head page struct iff the @page is a fake head page, oth= erwise + * return the @page itself. See Documentation/vm/vmemmap_dedup.rst. 
 */
 static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
 {
-- 
2.11.0
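To see how the fake-head test above behaves, here is a small userspace model of the decision logic. The bit layout follows the snippet added to vmemmap_dedup.rst; struct fake_page and is_fake_head() are simplified stand-ins written for this sketch, not the kernel's types.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct page: only the field the test reads. */
struct fake_page {
	unsigned long compound_head;	/* tail page: head address | 1 */
};

/*
 * Model of the vmemmap_dedup.rst snippet: a page with PG_head set is a
 * real head only if page[1].compound_head points back at this very page
 * (i.e. equals its address with the low tag bit set).
 */
static bool is_fake_head(const struct fake_page *page)
{
	unsigned long head = page[1].compound_head;

	return (head & 1) && head != ((unsigned long)page | 1);
}

int main(void)
{
	struct fake_page vmemmap[2], other_head;

	/* Real head: page[1] encodes "my head is vmemmap[0]". */
	vmemmap[1].compound_head = (unsigned long)&vmemmap[0] | 1;
	printf("real head reported fake? %d\n", is_fake_head(&vmemmap[0]));

	/* Fake head: page[1] points at some other head page, so vmemmap[0]
	 * must really be a remapped tail despite carrying PG_head. */
	vmemmap[1].compound_head = (unsigned long)&other_head | 1;
	printf("fake head reported fake? %d\n", is_fake_head(&vmemmap[0]));
	return 0;
}
```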
From nobody Mon Apr 27 15:22:25 2026
From: Muchun Song
To: mike.kravetz@oracle.com, david@redhat.com, akpm@linux-foundation.org, corbet@lwn.net
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, duanxiongchun@bytedance.com, Muchun Song
Subject: [PATCH v2 8/8] mm: hugetlb_vmemmap: use PTRS_PER_PTE instead of PMD_SIZE / PAGE_SIZE
Date: Tue, 28 Jun 2022 17:22:35 +0800
Message-Id: <20220628092235.91270-9-songmuchun@bytedance.com>
In-Reply-To: <20220628092235.91270-1-songmuchun@bytedance.com>
References: <20220628092235.91270-1-songmuchun@bytedance.com>

There is already a macro, PTRS_PER_PTE, representing the number of page
table entries; just use it.

Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6bbc445b1a66..65b527e1799c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -48,7 +48,7 @@ static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 
 	pmd_populate_kernel(&init_mm, &__pmd, pgtable);
 
-	for (i = 0; i < PMD_SIZE / PAGE_SIZE; i++, addr += PAGE_SIZE) {
+	for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
 		pte_t entry, *pte;
 		pgprot_t pgprot = PAGE_KERNEL;
 
-- 
2.11.0
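The identity this last patch relies on is that a PMD maps exactly PTRS_PER_PTE base pages, so PMD_SIZE / PAGE_SIZE == PTRS_PER_PTE. A compile-time sanity check of that identity, with assumed x86-64 constants (other configurations use different shifts, but the identity holds by construction):

```c
#include <assert.h>

/* Assumed x86-64 values for illustration. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PTRS_PER_PTE	512

/* Model of the equivalence the patch substitutes into the loop bound. */
static_assert(PMD_SIZE / PAGE_SIZE == PTRS_PER_PTE,
	      "a PMD maps exactly PTRS_PER_PTE base pages");

int main(void) { return 0; }
```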