From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 1/4] mm: hugetlb_vmemmap: disable hugetlb_optimize_vmemmap when struct page crosses page boundaries
Date: Thu, 12 May 2022 12:11:39 +0800
Message-Id: <20220512041142.39501-2-songmuchun@bytedance.com>
In-Reply-To: <20220512041142.39501-1-songmuchun@bytedance.com>
References: <20220512041142.39501-1-songmuchun@bytedance.com>

If the size of "struct page" is not a power of two and the feature that minimizes the overhead of struct page associated with each HugeTLB page is enabled, then the vmemmap pages of HugeTLB will be corrupted after remapping
(a panic would happen, in theory). This can only occur with !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a conventional configuration nowadays, so it is not a real-world issue, just the result of a code review. But we cannot prevent anyone from building that combination of options, so hugetlb_optimize_vmemmap should be disabled in this case to fix the issue.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb_vmemmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29554c6ef2ae..6254bb2d4ae5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
	 * The head page is not to be freed to buddy allocator, the other tail
-- 
2.11.0

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on
Date: Thu, 12 May 2022 12:11:40 +0800
Message-Id: <20220512041142.39501-3-songmuchun@bytedance.com>
In-Reply-To: <20220512041142.39501-1-songmuchun@bytedance.com>

Optimizing HugeTLB vmemmap pages is not compatible with allocating memmap on hot added memory. If "hugetlb_free_vmemmap=on" and "memory_hotplug.memmap_on_memory" are both passed on the kernel command line, optimizing hugetlb pages takes precedence. However, the global variable memmap_on_memory will still be set to 1, even though we will not try to allocate memmap on hot added memory.

Also introduce a mhp_memmap_on_memory() helper to move the definition of "memmap_on_memory" into the scope of CONFIG_MHP_MEMMAP_ON_MEMORY. In the next patch, mhp_memmap_on_memory() will also be exported to be used in hugetlb_vmemmap.c.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/memory_hotplug.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 111684878fd9..a6101ae402f9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,14 +42,36 @@
 #include "internal.h"
 #include "shuffle.h"
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
+{
+	if (hugetlb_optimize_vmemmap_enabled())
+		return 0;
+	return param_set_bool(val, kp);
+}
+
+static const struct kernel_param_ops memmap_on_memory_ops = {
+	.flags	= KERNEL_PARAM_OPS_FL_NOARG,
+	.set	= memmap_on_memory_set,
+	.get	= param_get_bool,
+};
 
 /*
  * memory_hotplug.memmap_on_memory parameter
  */
 static bool memmap_on_memory __ro_after_init;
-#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-module_param(memmap_on_memory, bool, 0444);
+module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
+
+static inline bool mhp_memmap_on_memory(void)
+{
+	return memmap_on_memory;
+}
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
 #endif
 
 enum {
@@ -1263,9 +1285,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
	 * altmap as an alternative source of memory, and we do not exactly
	 * populate a single PMD.
	 */
-	return memmap_on_memory &&
-	       !hugetlb_optimize_vmemmap_enabled() &&
-	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+	return mhp_memmap_on_memory() &&
	       size == memory_block_size_bytes() &&
	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));
@@ -2083,7 +2103,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
	 * the same granularity it was added - a single memory block.
	 */
-	if (memmap_on_memory) {
+	if (mhp_memmap_on_memory()) {
		nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
						      get_nr_vmemmap_pages_cb);
		if (nr_vmemmap_pages) {
-- 
2.11.0

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 3/4] mm: hugetlb_vmemmap: use kstrtobool for hugetlb_vmemmap param parsing
Date: Thu, 12 May 2022 12:11:41 +0800
Message-Id: <20220512041142.39501-4-songmuchun@bytedance.com>
In-Reply-To: <20220512041142.39501-1-songmuchun@bytedance.com>

Use kstrtobool rather than open coding "on" and "off"
parsing in mm/hugetlb_vmemmap.c, which can handle all kinds of parameter forms, such as 'Yy1Nn0' or [oO][NnFf], for "on" and "off".

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  6 +++---
 mm/hugetlb_vmemmap.c                            | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 308da668bbb1..43b8385073ad 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1703,10 +1703,10 @@
			enabled.
			Allows heavy hugetlb users to free up some more
			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
-			Format: { on | off (default) }
+			Format: { [oO][Nn]/Y/y/1 | [oO][Ff]/N/n/0 (default) }
 
-			on: enable the feature
-			off: disable the feature
+			[oO][Nn]/Y/y/1: enable the feature
+			[oO][Ff]/N/n/0: disable the feature
 
			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
			the default is on.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6254bb2d4ae5..cc4ec752ec16 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,15 +28,15 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	if (!buf)
+	bool enable;
+
+	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (!strcmp(buf, "on"))
+	if (enable)
 		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else if (!strcmp(buf, "off"))
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 	else
-		return -EINVAL;
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 
 	return 0;
 }
-- 
2.11.0

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl
Date: Thu, 12 May 2022 12:11:42 +0800
Message-Id: <20220512041142.39501-5-songmuchun@bytedance.com>
In-Reply-To: <20220512041142.39501-1-songmuchun@bytedance.com>

We must add "hugetlb_free_vmemmap=on" (or "off") to the boot cmdline and reboot the server to enable or disable the feature of optimizing vmemmap pages associated with HugeTLB pages. However, rebooting usually takes a long time, so add a sysctl to enable or disable the feature at runtime without rebooting.

Why do we need this? There are three use cases.

1) The feature of minimizing the overhead of struct page associated with each HugeTLB page is disabled by default unless "hugetlb_free_vmemmap=on" is passed on the boot cmdline. When we (ByteDance) deliver servers to users who want to enable this feature, they have to configure grub (change the boot cmdline) and reboot the servers, and rebooting usually takes a long time (we have thousands of servers). It is a very bad experience for the users. So we need an approach to enable this feature at runtime, without a reboot. This is a use case from our production environment.

2) In some use cases, HugeTLB pages are allocated 'on the fly' instead of being pulled from the HugeTLB pool; those workloads would be affected with this feature enabled. Such workloads can be identified by the characteristic that they never explicitly allocate huge pages via 'nr_hugepages' but only set 'nr_overcommit_hugepages' and then let the pages be allocated from the buddy allocator at fault time. We can confirm this is a real use case from commit 099730d67417. For those workloads, the page fault time could be ~2x slower than before.
We suspect those users would want to disable this feature if the system has enabled it before and they don't think the memory savings are enough to make up for the performance drop.

3) A workload that wants vmemmap pages to be optimized and a workload that sets 'nr_overcommit_hugepages' and does not want the extra overhead at fault time when the overcommitted pages are allocated from the buddy allocator may be deployed on the same server. The user could enable this feature, set 'nr_hugepages' and 'nr_overcommit_hugepages', and then disable the feature. In this case, the overcommitted HugeTLB pages will not incur the extra overhead at fault time.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 Documentation/admin-guide/sysctl/vm.rst | 39 ++++++++++++++
 include/linux/memory_hotplug.h          |  9 ++++
 mm/hugetlb_vmemmap.c                    | 93 +++++++++++++++++++++++++++++----
 mm/memory_hotplug.c                     |  7 +--
 4 files changed, 133 insertions(+), 15 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 747e325ebcd0..5c9aa171a0d3 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -562,6 +562,45 @@ Change the minimum size of the hugepage pool.
 
 See Documentation/admin-guide/mm/hugetlbpage.rst
 
 
+hugetlb_optimize_vmemmap
+========================
+
+This knob is not available when memory_hotplug.memmap_on_memory (kernel
+parameter) is configured or the size of 'struct page' (a structure defined in
+include/linux/mm_types.h) is not a power of two (an unusual system config
+could result in this).
+
+Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
+associated with each HugeTLB page.
+
+Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
+the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095
+pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not
+be optimized.  When those optimized HugeTLB pages are freed from the HugeTLB
+pool to the buddy allocator, the vmemmap pages representing that range need to
+be remapped again and the vmemmap pages discarded earlier need to be
+reallocated again.  If your use case is that HugeTLB pages are allocated 'on
+the fly' (e.g. never explicitly allocating HugeTLB pages with 'nr_hugepages'
+but only setting 'nr_overcommit_hugepages', so that overcommitted HugeTLB
+pages are allocated 'on the fly') instead of being pulled from the HugeTLB
+pool, you should weigh the benefits of memory savings against the extra
+overhead (~2x slower than before) of allocating or freeing HugeTLB pages
+between the HugeTLB pool and the buddy allocator.  Another behavior to note is
+that if the system is under heavy memory pressure, it could prevent the user
+from freeing HugeTLB pages from the HugeTLB pool to the buddy allocator, since
+the allocation of vmemmap pages could fail; you have to retry later if your
+system encounters this situation.
+
+Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages
+from the buddy allocator will not be optimized, meaning the extra overhead at
+allocation time from the buddy allocator disappears, whereas already optimized
+HugeTLB pages will not be affected.  If you want to make sure there are no
+optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and then
+disable this.  Note that writing 0 to nr_hugepages will make any "in use"
+HugeTLB pages become surplus pages.  So, those surplus pages are still
+optimized until they are no longer in use.  You would need to wait for those
+surplus pages to be released before there are no optimized pages in the
+system.
+
+
 nr_hugepages_mempolicy
 ======================
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 029fb7e26504..917112661b5c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -351,4 +351,13 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index cc4ec752ec16..fcd9f7872064 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -10,6 +10,7 @@
  */
 #define pr_fmt(fmt) "HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -22,21 +23,40 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+enum vmemmap_optimize_mode {
+	VMEMMAP_OPTIMIZE_OFF,
+	VMEMMAP_OPTIMIZE_ON,
+};
+
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
			hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
+static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
+
+static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
+{
+	if (vmemmap_optimize_mode == to)
+		return;
+
+	if (to == VMEMMAP_OPTIMIZE_OFF)
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
+		static_branch_inc(&hugetlb_optimize_vmemmap_key);
+	WRITE_ONCE(vmemmap_optimize_mode, to);
+}
+
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
 	bool enable;
+	enum vmemmap_optimize_mode mode;
 
 	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (enable)
-		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
+	vmemmap_optimize_mode_switch(mode);
 
 	return 0;
 }
@@ -69,8 +89,10 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
	 */
	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
		ClearHPageVmemmapOptimized(head);
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	}
 
	return ret;
 }
@@ -84,6 +106,11 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
	if (!vmemmap_pages)
		return;
 
+	if (READ_ONCE(vmemmap_optimize_mode) == VMEMMAP_OPTIMIZE_OFF)
+		return;
+
+	static_branch_inc(&hugetlb_optimize_vmemmap_key);
+
	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
	vmemmap_end = vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
@@ -93,7 +120,9 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
	 * to the page which @vmemmap_reuse is mapped to, then free the pages
	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
		SetHPageVmemmapOptimized(head);
 }
@@ -110,9 +139,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
-	if (!hugetlb_optimize_vmemmap_enabled())
-		return;
-
	if (!is_power_of_2(sizeof(struct page))) {
		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
		static_branch_disable(&hugetlb_optimize_vmemmap_key);
@@ -134,3 +160,52 @@
	pr_info("can optimize %d vmemmap pages for %s\n",
		h->optimize_vmemmap_pages, h->name);
 }
+
+#ifdef CONFIG_PROC_SYSCTL
+static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
+					    void *buffer, size_t *length,
+					    loff_t *ppos)
+{
+	int ret;
+	enum vmemmap_optimize_mode mode;
+	static DEFINE_MUTEX(sysctl_mutex);
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	mutex_lock(&sysctl_mutex);
+	mode = vmemmap_optimize_mode;
+	table->data = &mode;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (write && !ret)
+		vmemmap_optimize_mode_switch(mode);
+	mutex_unlock(&sysctl_mutex);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_optimize_vmemmap",
+		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.mode		= 0644,
+		.proc_handler	= hugetlb_optimize_vmemmap_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * If "memory_hotplug.memmap_on_memory" is enabled or "struct page"
+	 * crosses page boundaries, the vmemmap pages cannot be optimized.
+	 */
+	if (!mhp_memmap_on_memory() && is_power_of_2(sizeof(struct page)))
+		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+
+	return 0;
+}
+late_initcall(hugetlb_vmemmap_sysctls_init);
+#endif /* CONFIG_PROC_SYSCTL */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a6101ae402f9..c72070cdd055 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -63,15 +63,10 @@
 static bool memmap_on_memory __ro_after_init;
 module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
 
-static inline bool mhp_memmap_on_memory(void)
+bool mhp_memmap_on_memory(void)
 {
	return memmap_on_memory;
 }
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
-}
 #endif
 
 enum {
-- 
2.11.0