From nobody Sun May 19 11:06:55 2024
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
 Randy Dunlap, Neeraj Upadhyay, Damien Le Moal, Kim Phillips,
 "Steven Rostedt (Google)", Michal Hocko, Johannes Weiner, Vlastimil Babka,
 KOSAKI Motohiro, Mel Gorman, Muchun Song, Mike Kravetz, Florian Fainelli,
 David Hildenbrand, Oscar Salvador, Joonsoo Kim, Sukadev Bhattiprolu,
 Rik van Riel, Roman Gushchin, Minchan Kim, Chris Goldsworthy,
 "Georgi Djakov", linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Doug Berger
Subject: [PATCH v4 1/9] lib/show_mem.c: display MovableOnly
Date: Fri, 10 Mar 2023 16:38:47 -0800
Message-Id: <20230311003855.645684-2-opendmb@gmail.com>
In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com>

The comment for commit c78e93630d15 ("mm: do not walk all of system
memory during show_mem") indicates it "also corrects the reporting of
HighMem as HighMem/MovableOnly as ZONE_MOVABLE has similar problems to
HighMem with respect to lowmem/highmem exhaustion." Presuming the
similar problems are the general exclusion of kernel allocations from
either zone, I believe it makes sense to include all ZONE_MOVABLE memory
even on systems without HighMem. To the extent that this was the intent
of the original commit, I have included a "Fixes" tag, but it seems
unnecessary to submit it to linux-stable.
Fixes: c78e93630d15 ("mm: do not walk all of system memory during show_mem")
Signed-off-by: Doug Berger
---
 lib/show_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/show_mem.c b/lib/show_mem.c
index 0d7585cde2a6..6a632b0c35c5 100644
--- a/lib/show_mem.c
+++ b/lib/show_mem.c
@@ -27,7 +27,7 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
 			total += zone->present_pages;
 			reserved += zone->present_pages - zone_managed_pages(zone);
 
-			if (is_highmem_idx(zoneid))
+			if (zoneid == ZONE_MOVABLE || is_highmem_idx(zoneid))
 				highmem += zone->present_pages;
 		}
 	}
-- 
2.34.1
From: Doug Berger
To: Andrew Morton
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 2/9] mm/page_alloc: calculate node_spanned_pages from pfns
Date: Fri, 10 Mar 2023 16:38:48 -0800
Message-Id: <20230311003855.645684-3-opendmb@gmail.com>
In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com>

Since the start and end pfns of the node are passed as arguments to
calculate_node_totalpages(), they might as well be used to specify the
node_spanned_pages value for the node rather than accumulating the spans
of member zones. This removes the need for additional adjustments if
zones are allowed to overlap. The realtotalpages name is reverted to
just totalpages to reduce the burden of supporting multiple realities.
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac1fc986af44..b1952f86ab6d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7586,7 +7586,7 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 						unsigned long node_start_pfn,
 						unsigned long node_end_pfn)
 {
-	unsigned long realtotalpages = 0, totalpages = 0;
+	unsigned long totalpages = 0;
 	enum zone_type i;
 
 	for (i = 0; i < MAX_NR_ZONES; i++) {
@@ -7617,13 +7617,12 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 		zone->present_early_pages = real_size;
 #endif
 
-		totalpages += size;
-		realtotalpages += real_size;
+		totalpages += real_size;
 	}
 
-	pgdat->node_spanned_pages = totalpages;
-	pgdat->node_present_pages = realtotalpages;
-	pr_debug("On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
+	pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
+	pgdat->node_present_pages = totalpages;
+	pr_debug("On node %d totalpages: %lu\n", pgdat->node_id, totalpages);
 }
 
 #ifndef CONFIG_SPARSEMEM
-- 
2.34.1
From: Doug Berger
To: Andrew Morton
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 3/9] mm/page_alloc: prevent creation of empty zones
Date: Fri, 10 Mar 2023 16:38:49 -0800
Message-Id: <20230311003855.645684-4-opendmb@gmail.com>
In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com>

If none of the pages a zone spans are present, its start pfn and span
should be zeroed to prevent initialization. This prevents the creation
of an empty zone if all of its pages are moved to a zone that would
overlap it. The real_size name is reverted to just size to reduce the
burden of supporting multiple realities.
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b1952f86ab6d..827b4bfef625 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7592,8 +7592,7 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 	for (i = 0; i < MAX_NR_ZONES; i++) {
 		struct zone *zone = pgdat->node_zones + i;
 		unsigned long zone_start_pfn, zone_end_pfn;
-		unsigned long spanned, absent;
-		unsigned long size, real_size;
+		unsigned long spanned, absent, size;
 
 		spanned = zone_spanned_pages_in_node(pgdat->node_id, i,
 						     node_start_pfn,
@@ -7604,20 +7603,21 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 						  node_start_pfn,
 						  node_end_pfn);
 
-		size = spanned;
-		real_size = size - absent;
+		size = spanned - absent;
 
-		if (size)
+		if (size) {
 			zone->zone_start_pfn = zone_start_pfn;
-		else
+		} else {
+			spanned = 0;
 			zone->zone_start_pfn = 0;
-		zone->spanned_pages = size;
-		zone->present_pages = real_size;
+		}
+		zone->spanned_pages = spanned;
+		zone->present_pages = size;
 #if defined(CONFIG_MEMORY_HOTPLUG)
-		zone->present_early_pages = real_size;
+		zone->present_early_pages = size;
 #endif
 
-		totalpages += real_size;
+		totalpages += size;
 	}
 
 	pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
-- 
2.34.1
From: Doug Berger
To: Andrew Morton
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 4/9] mm/page_alloc.c: allow oversized movablecore
Date: Fri, 10 Mar 2023 16:38:50 -0800
Message-Id: <20230311003855.645684-5-opendmb@gmail.com>
In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com>

Now that the error in computation of corepages has been corrected by
commit 9fd745d450e7 ("mm: fix overflow in
find_zone_movable_pfns_for_nodes()"), an oversized specification of
movablecore will result in a zero value for required_kernelcore if
kernelcore is not also specified. It is unintuitive for such a request
to lead to no ZONE_MOVABLE memory when the kernel parameters clearly
request some. The current behavior when requesting an oversized
kernelcore is to classify all of the pages in movable_zone as
kernelcore.
The new behavior when requesting an oversized movablecore (when
kernelcore is not also specified) is to similarly classify all of the
pages in movable_zone as movablecore.

Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 827b4bfef625..e574c6a79e2f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8166,13 +8166,13 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		corepages = totalpages - required_movablecore;
 
 		required_kernelcore = max(required_kernelcore, corepages);
+	} else if (!required_kernelcore) {
+		/* If kernelcore was not specified, there is no ZONE_MOVABLE */
+		goto out;
 	}
 
-	/*
-	 * If kernelcore was not specified or kernelcore size is larger
-	 * than totalpages, there is no ZONE_MOVABLE.
-	 */
-	if (!required_kernelcore || required_kernelcore >= totalpages)
+	/* If kernelcore size exceeds totalpages, there is no ZONE_MOVABLE */
+	if (required_kernelcore >= totalpages)
		goto out;
 
 	/* usable_startpfn is the lowest possible pfn ZONE_MOVABLE can be at */
-- 
2.34.1
From: Doug Berger
To: Andrew Morton
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 5/9] mm/page_alloc: introduce init_reserved_pageblock()
Date: Fri, 10 Mar 2023 16:38:51 -0800
Message-Id: <20230311003855.645684-6-opendmb@gmail.com>
In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com>

Most of the implementation of init_cma_reserved_pageblock() is common to
the initialization of any reserved pageblock for use by the page
allocator. This commit breaks that functionality out into the new common
function init_reserved_pageblock() for use by code other than CMA. The
CMA-specific code is relocated from page_alloc to the point where
init_cma_reserved_pageblock() was invoked, and the new function is used
there instead. The error path is also updated to use the function to
operate on pageblocks rather than pages.
Signed-off-by: Doug Berger
---
 include/linux/gfp.h |  5 +----
 mm/cma.c            | 15 +++++++++++----
 mm/page_alloc.c     |  8 ++------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 65a78773dcca..a7892b3c436b 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -361,9 +361,6 @@ extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 #endif
 void free_contig_range(unsigned long pfn, unsigned long nr_pages);
 
-#ifdef CONFIG_CMA
-/* CMA stuff */
-extern void init_cma_reserved_pageblock(struct page *page);
-#endif
+extern void init_reserved_pageblock(struct page *page);
 
 #endif /* __LINUX_GFP_H */
diff --git a/mm/cma.c b/mm/cma.c
index a7263aa02c92..cc462df68781 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "cma.h"
@@ -116,8 +117,13 @@ static void __init cma_activate_area(struct cma *cma)
 	}
 
 	for (pfn = base_pfn; pfn < base_pfn + cma->count;
-	     pfn += pageblock_nr_pages)
-		init_cma_reserved_pageblock(pfn_to_page(pfn));
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		set_pageblock_migratetype(page, MIGRATE_CMA);
+		init_reserved_pageblock(page);
+		page_zone(page)->cma_pages += pageblock_nr_pages;
+	}
 
 	spin_lock_init(&cma->lock);
 
@@ -133,8 +139,9 @@ static void __init cma_activate_area(struct cma *cma)
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
 	if (!cma->reserve_pages_on_error) {
-		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-			free_reserved_page(pfn_to_page(pfn));
+		for (pfn = base_pfn; pfn < base_pfn + cma->count;
+		     pfn += pageblock_nr_pages)
+			init_reserved_pageblock(pfn_to_page(pfn));
 	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e574c6a79e2f..da1af678995b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2308,9 +2308,8 @@ void __init page_alloc_init_late(void)
 		set_zone_contiguous(zone);
 }
 
-#ifdef CONFIG_CMA
-/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
-void __init init_cma_reserved_pageblock(struct page *page)
+/* Free whole pageblock */
+void __init init_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
@@ -2320,14 +2319,11 @@ void __init init_cma_reserved_pageblock(struct page *page)
 		set_page_count(p, 0);
 	} while (++p, --i);
 
-	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
 	__free_pages(page, pageblock_order);
 
 	adjust_managed_page_count(page, pageblock_nr_pages);
-	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
-#endif
 
 /*
  * The order of subdivision here is critical for the IO subsystem.
-- 
2.34.1
b=WhVlndF5Bb4OF7zrC0OZQ3hQ92G6RMhm6JunaDo1YU81XrU41IxDquIEbfpXtbTJul Ojim18CbCSWswyP2qfRPClo1Eakt+y6fO2C/eK+TKb2f2sDYPRgxrOXpCnNtiDVPjHRf eNTABcX8cK9HIbudURmOn+jzik3o3jioO6tIgWevKmdEk8ZLsVQQoR/ESZj4AuPs2y7F kPLNiTLDVaOmRm9LKWLMfaqwhORShC8b0zi0qe4E2DY7gF7AOWYVNYJdRsznVS9LzWXC rKDWEwdN4RSIC38bnuSucty/2Ni3W1FIxQwUKnvOHrNYJTG9x6wth/G5GvmzMg8ulNbL o3bA== X-Gm-Message-State: AO0yUKXmv0tRq5J5L5owNNn7SzYIBJjz282U80sXaDlSGw+yctR57PXF HHl9MzhRQhdcZUCu3B8/j0s= X-Google-Smtp-Source: AK7set8ZGDP7wAjGGI9tJbICfiwjsQBTns3pou3iyFBkgtdpeY6NM2VuirqgiTSIHG6uE67a1R8tZQ== X-Received: by 2002:ac8:57cd:0:b0:3bf:d9a9:25f7 with SMTP id w13-20020ac857cd000000b003bfd9a925f7mr12075430qta.6.1678495205288; Fri, 10 Mar 2023 16:40:05 -0800 (PST) Received: from stbirv-lnx-1.igp.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id a5-20020ac84345000000b003bfaff2a6b9sm868874qtn.10.2023.03.10.16.40.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 10 Mar 2023 16:40:04 -0800 (PST) From: Doug Berger To: Andrew Morton Cc: Jonathan Corbet , Mike Rapoport , Borislav Petkov , "Paul E. 
McKenney" , Randy Dunlap , Neeraj Upadhyay , Damien Le Moal , Kim Phillips , "Steven Rostedt (Google)" , Michal Hocko , Johannes Weiner , Vlastimil Babka , KOSAKI Motohiro , Mel Gorman , Muchun Song , Mike Kravetz , Florian Fainelli , David Hildenbrand , Oscar Salvador , Joonsoo Kim , Sukadev Bhattiprolu , Rik van Riel , Roman Gushchin , Minchan Kim , Chris Goldsworthy , "Georgi Djakov" , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Doug Berger Subject: [PATCH v4 6/9] memblock: introduce MEMBLOCK_MOVABLE flag Date: Fri, 10 Mar 2023 16:38:52 -0800 Message-Id: <20230311003855.645684-7-opendmb@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230311003855.645684-1-opendmb@gmail.com> References: <20230311003855.645684-1-opendmb@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" The MEMBLOCK_MOVABLE flag is introduced to designate a memblock as only supporting movable allocations by the page allocator. 
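For readers outside the kernel tree, the flag semantics added here can be sketched with simplified stand-in types. The `demo_` names below are illustrative, not the kernel API; the flag values mirror the enum in include/linux/memblock.h after this patch.

```c
#include <stdbool.h>

/* Simplified stand-ins for the memblock region flag bits. */
enum demo_memblock_flags {
	DEMO_MEMBLOCK_NONE		= 0x0,	/* no special request */
	DEMO_MEMBLOCK_HOTPLUG		= 0x1,	/* hotpluggable region */
	DEMO_MEMBLOCK_MIRROR		= 0x2,	/* mirrored region */
	DEMO_MEMBLOCK_NOMAP		= 0x4,	/* no kernel direct mapping */
	DEMO_MEMBLOCK_DRIVER_MANAGED	= 0x8,	/* detected via a driver */
	DEMO_MEMBLOCK_MOVABLE		= 0x10,	/* designated movable block */
};

struct demo_region {
	unsigned int flags;
};

/* Set or clear one flag bit, mirroring the set/clear idiom the
 * kernel's memblock_setclr_flag() applies per matching region. */
static void demo_setclr_flag(struct demo_region *r, bool set,
			     unsigned int flag)
{
	if (set)
		r->flags |= flag;
	else
		r->flags &= ~flag;
}

/* Test the flag, as memblock_is_movable() does for a region. */
static bool demo_is_movable(const struct demo_region *r)
{
	return r->flags & DEMO_MEMBLOCK_MOVABLE;
}
```

Because each flag occupies its own bit, marking a region movable composes freely with any other designation already set on it.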
Signed-off-by: Doug Berger
---
 include/linux/memblock.h |  8 ++++++++
 mm/memblock.c            | 24 ++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 50ad19662a32..8eb3ca32dfa7 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -47,6 +47,7 @@ enum memblock_flags {
 	MEMBLOCK_MIRROR = 0x2,		/* mirrored region */
 	MEMBLOCK_NOMAP = 0x4,		/* don't add to kernel direct mapping */
 	MEMBLOCK_DRIVER_MANAGED = 0x8,	/* always detected via a driver */
+	MEMBLOCK_MOVABLE = 0x10,	/* designated movable block */
 };

 /**
@@ -125,6 +126,8 @@ int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
 int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
 int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
+int memblock_mark_movable(phys_addr_t base, phys_addr_t size);
+int memblock_clear_movable(phys_addr_t base, phys_addr_t size);

 void memblock_free_all(void);
 void memblock_free(void *ptr, size_t size);
@@ -265,6 +268,11 @@ static inline bool memblock_is_driver_managed(struct memblock_region *m)
 	return m->flags & MEMBLOCK_DRIVER_MANAGED;
 }

+static inline bool memblock_is_movable(struct memblock_region *m)
+{
+	return m->flags & MEMBLOCK_MOVABLE;
+}
+
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
diff --git a/mm/memblock.c b/mm/memblock.c
index 25fd0626a9e7..794a099ec3e2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -992,6 +992,30 @@ int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)
 	return memblock_setclr_flag(base, size, 0, MEMBLOCK_NOMAP);
 }

+/**
+ * memblock_mark_movable - Mark designated movable block with MEMBLOCK_MOVABLE.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_mark_movable(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 1, MEMBLOCK_MOVABLE);
+}
+
+/**
+ * memblock_clear_movable - Clear flag MEMBLOCK_MOVABLE for a specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_movable(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 0, MEMBLOCK_MOVABLE);
+}
+
 static bool should_skip_region(struct memblock_type *type,
 			       struct memblock_region *m,
 			       int nid, int flags)
-- 
2.34.1

From: Doug Berger
Subject: [PATCH v4 7/9] mm/dmb: Introduce Designated Movable Blocks
Date: Fri, 10 Mar 2023 16:38:53 -0800
Message-Id: <20230311003855.645684-8-opendmb@gmail.com>

Designated Movable Blocks are blocks of memory that are composed of
one or more adjacent memblocks that have the MEMBLOCK_MOVABLE
designation. These blocks must be reserved before receiving that
designation and will be located in the ZONE_MOVABLE zone rather than
any other zone that may span them.
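The three-way range classification this patch implements in dmb_intersects() can be sketched against a single block as follows. The `demo_` names are illustrative stand-ins, not the kernel API; pfn ranges are half-open, with end_pfn exclusive, as in the patch.

```c
/* Illustrative re-creation of the classification in mm/dmb.c: a
 * queried pfn range is DISJOINT from a block, fully contained in it
 * (INTERSECTS), or partially overlapping (MIXED). */
enum { DEMO_DISJOINT = 0, DEMO_INTERSECTS, DEMO_MIXED };

struct demo_dmb {
	unsigned long start_pfn;
	unsigned long end_pfn;	/* exclusive */
};

static int demo_classify(const struct demo_dmb *dmb,
			 unsigned long spfn, unsigned long epfn)
{
	if (spfn >= epfn)
		return DEMO_DISJOINT;	/* empty query range */
	if (spfn >= dmb->end_pfn || epfn <= dmb->start_pfn)
		return DEMO_DISJOINT;	/* no overlap at all */
	if (spfn >= dmb->start_pfn && epfn <= dmb->end_pfn)
		return DEMO_INTERSECTS;	/* fully inside the block */
	return DEMO_MIXED;		/* straddles a block boundary */
}
```

The MIXED case is what lets later patches detect ranges that would have to mix DMB and non-DMB pageblocks, and reject them early.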
Signed-off-by: Doug Berger
---
 include/linux/dmb.h |  29 ++++++++++++++
 mm/Kconfig          |  12 ++++++
 mm/Makefile         |   1 +
 mm/dmb.c            |  91 +++++++++++++++++++++++++++++++++++++++++++
 mm/memblock.c       |   6 ++-
 mm/page_alloc.c     |  95 ++++++++++++++++++++++++++++++++++++++-------
 6 files changed, 220 insertions(+), 14 deletions(-)
 create mode 100644 include/linux/dmb.h
 create mode 100644 mm/dmb.c

diff --git a/include/linux/dmb.h b/include/linux/dmb.h
new file mode 100644
index 000000000000..fa2976c0fa21
--- /dev/null
+++ b/include/linux/dmb.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __DMB_H__
+#define __DMB_H__
+
+#include
+
+/*
+ * the buddy -- especially pageblock merging and alloc_contig_range()
+ * -- can deal with only some pageblocks of a higher-order page being
+ * MIGRATE_MOVABLE, we can use pageblock_nr_pages.
+ */
+#define DMB_MIN_ALIGNMENT_PAGES pageblock_nr_pages
+#define DMB_MIN_ALIGNMENT_BYTES (PAGE_SIZE * DMB_MIN_ALIGNMENT_PAGES)
+
+enum {
+	DMB_DISJOINT = 0,
+	DMB_INTERSECTS,
+	DMB_MIXED,
+};
+
+struct dmb;
+
+extern int dmb_intersects(unsigned long spfn, unsigned long epfn);
+
+extern int dmb_reserve(phys_addr_t base, phys_addr_t size,
+		       struct dmb **res_dmb);
+extern void dmb_init_region(struct memblock_region *region);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index 4751031f3f05..85ac5f136487 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -913,6 +913,18 @@ config CMA_AREAS

 	  If unsure, leave the default value "7" in UMA and "19" in NUMA.

+config DMB_COUNT
+	int "Maximum count of Designated Movable Blocks"
+	default 19 if NUMA
+	default 7
+	help
+	  Designated Movable Blocks are blocks of memory that can be used
+	  by the page allocator exclusively for movable pages. They are
+	  managed in ZONE_MOVABLE but may overlap with other zones. This
+	  parameter sets the maximum number of DMBs in the system.
+
+	  If unsure, leave the default value "7" in UMA and "19" in NUMA.
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e29..824be8fb11cd 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -67,6 +67,7 @@ obj-y += page-alloc.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
+obj-y += dmb.o

 ifdef CONFIG_MMU
 	obj-$(CONFIG_ADVISE_SYSCALLS) += madvise.o
diff --git a/mm/dmb.c b/mm/dmb.c
new file mode 100644
index 000000000000..f6c4e2662e0f
--- /dev/null
+++ b/mm/dmb.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Designated Movable Block
+ */
+
+#define pr_fmt(fmt) "dmb: " fmt
+
+#include
+
+struct dmb {
+	unsigned long start_pfn;
+	unsigned long end_pfn;
+};
+
+static struct dmb dmb_areas[CONFIG_DMB_COUNT];
+static unsigned int dmb_area_count;
+
+int dmb_intersects(unsigned long spfn, unsigned long epfn)
+{
+	int i;
+	struct dmb *dmb;
+
+	if (spfn >= epfn)
+		return DMB_DISJOINT;
+
+	for (i = 0; i < dmb_area_count; i++) {
+		dmb = &dmb_areas[i];
+		if (spfn >= dmb->end_pfn)
+			continue;
+		if (epfn <= dmb->start_pfn)
+			return DMB_DISJOINT;
+		if (spfn >= dmb->start_pfn && epfn <= dmb->end_pfn)
+			return DMB_INTERSECTS;
+		else
+			return DMB_MIXED;
+	}
+
+	return DMB_DISJOINT;
+}
+EXPORT_SYMBOL(dmb_intersects);
+
+int __init dmb_reserve(phys_addr_t base, phys_addr_t size,
+		       struct dmb **res_dmb)
+{
+	struct dmb *dmb;
+
+	/* Sanity checks */
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	if (!IS_ALIGNED(base | size, DMB_MIN_ALIGNMENT_BYTES))
+		return -EINVAL;
+
+	if (dmb_area_count == ARRAY_SIZE(dmb_areas)) {
+		pr_warn("Not enough slots for DMB reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	dmb = &dmb_areas[dmb_area_count++];
+
+	dmb->start_pfn = PFN_DOWN(base);
+	dmb->end_pfn = PFN_DOWN(base + size);
+	if (res_dmb)
+		*res_dmb = dmb;
+
+	memblock_mark_movable(base, size);
+	return 0;
+}
+
+void __init dmb_init_region(struct memblock_region *region)
+{
+	unsigned long pfn;
+	int i;
+
+	for (pfn = memblock_region_memory_base_pfn(region);
+	     pfn < memblock_region_memory_end_pfn(region);
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		for (i = 0; i < pageblock_nr_pages; i++)
+			set_page_zone(page + i, ZONE_MOVABLE);
+
+		/* free reserved pageblocks to page allocator */
+		init_reserved_pageblock(page);
+	}
+}
diff --git a/mm/memblock.c b/mm/memblock.c
index 794a099ec3e2..3db06288a5c0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -2103,13 +2104,16 @@ static void __init memmap_init_reserved_pages(void)
 	for_each_reserved_mem_range(i, &start, &end)
 		reserve_bootmem_region(start, end);

-	/* and also treat struct pages for the NOMAP regions as PageReserved */
 	for_each_mem_region(region) {
+		/* treat struct pages for the NOMAP regions as PageReserved */
 		if (memblock_is_nomap(region)) {
 			start = region->base;
 			end = start + region->size;
 			reserve_bootmem_region(start, end);
 		}
+		/* move Designated Movable Block pages to ZONE_MOVABLE */
+		if (memblock_is_movable(region))
+			dmb_init_region(region);
 	}
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index da1af678995b..26846a9a9fc4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -76,6 +76,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -414,6 +415,8 @@ static unsigned long required_kernelcore __initdata;
 static unsigned long required_kernelcore_percent __initdata;
 static unsigned long required_movablecore __initdata;
 static unsigned long required_movablecore_percent __initdata;
+static unsigned long min_dmb_pfn[MAX_NUMNODES] __initdata;
+static unsigned long max_dmb_pfn[MAX_NUMNODES] __initdata;
 static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata;
 bool mirrored_kernelcore __initdata_memblock;

@@ -2171,7 +2174,7 @@ static int __init deferred_init_memmap(void *data)
 	}
 zone_empty:
 	/* Sanity check that the next zone really is unpopulated */
-	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
+	WARN_ON(++zid < ZONE_MOVABLE && populated_zone(++zone));

 	pr_info("node %d deferred pages initialised in %ums\n",
 		pgdat->node_id, jiffies_to_msecs(jiffies - start));
@@ -7022,6 +7025,10 @@ static void __init memmap_init_zone_range(struct zone *zone,
 	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
 	int nid = zone_to_nid(zone), zone_id = zone_idx(zone);

+	/* Skip overlap of ZONE_MOVABLE */
+	if (zone_id == ZONE_MOVABLE && zone_start_pfn < *hole_pfn)
+		zone_start_pfn = *hole_pfn;
+
 	start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
 	end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);

@@ -7482,6 +7489,12 @@ static unsigned long __init zone_spanned_pages_in_node(int nid,
 					   node_start_pfn, node_end_pfn,
 					   zone_start_pfn, zone_end_pfn);

+	if (zone_type == ZONE_MOVABLE && max_dmb_pfn[nid]) {
+		if (*zone_start_pfn == *zone_end_pfn)
+			*zone_end_pfn = max_dmb_pfn[nid];
+		*zone_start_pfn = min(*zone_start_pfn, min_dmb_pfn[nid]);
+	}
+
 	/* Check that this node has pages within the zone's required range */
 	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
 		return 0;
@@ -7550,12 +7563,21 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 			&zone_start_pfn, &zone_end_pfn);
 	nr_absent = __absent_pages_in_range(nid, zone_start_pfn, zone_end_pfn);

+	if (zone_type == ZONE_MOVABLE && max_dmb_pfn[nid]) {
+		if (zone_start_pfn == zone_end_pfn)
+			zone_end_pfn = max_dmb_pfn[nid];
+		else
+			zone_end_pfn = zone_movable_pfn[nid];
+		zone_start_pfn = min(zone_start_pfn, min_dmb_pfn[nid]);
+		nr_absent += zone_end_pfn - zone_start_pfn;
+	}
+
 	/*
 	 * ZONE_MOVABLE handling.
-	 * Treat pages to be ZONE_MOVABLE in ZONE_NORMAL as absent pages
+	 * Treat pages to be ZONE_MOVABLE in other zones as absent pages
 	 * and vice versa.
 	 */
-	if (mirrored_kernelcore && zone_movable_pfn[nid]) {
+	if (zone_movable_pfn[nid]) {
 		unsigned long start_pfn, end_pfn;
 		struct memblock_region *r;

@@ -7565,6 +7587,19 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 			end_pfn = clamp(memblock_region_memory_end_pfn(r),
 					zone_start_pfn, zone_end_pfn);

+			if (memblock_is_movable(r)) {
+				if (zone_type != ZONE_MOVABLE) {
+					nr_absent += end_pfn - start_pfn;
+					continue;
+				}
+
+				nr_absent -= end_pfn - start_pfn;
+				continue;
+			}
+
+			if (!mirrored_kernelcore)
+				continue;
+
 			if (zone_type == ZONE_MOVABLE &&
 			    memblock_is_mirror(r))
 				nr_absent += end_pfn - start_pfn;
@@ -7584,18 +7619,27 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 {
 	unsigned long totalpages = 0;
 	enum zone_type i;
+	int nid = pgdat->node_id;
+
+	/*
+	 * If Designated Movable Blocks are defined on this node, ensure that
+	 * zone_movable_pfn is also defined for this node.
+	 */
+	if (max_dmb_pfn[nid] && !zone_movable_pfn[nid])
+		zone_movable_pfn[nid] = min(node_end_pfn,
+			arch_zone_highest_possible_pfn[movable_zone]);

 	for (i = 0; i < MAX_NR_ZONES; i++) {
 		struct zone *zone = pgdat->node_zones + i;
 		unsigned long zone_start_pfn, zone_end_pfn;
 		unsigned long spanned, absent, size;

-		spanned = zone_spanned_pages_in_node(pgdat->node_id, i,
+		spanned = zone_spanned_pages_in_node(nid, i,
						     node_start_pfn,
						     node_end_pfn,
						     &zone_start_pfn,
						     &zone_end_pfn);
-		absent = zone_absent_pages_in_node(pgdat->node_id, i,
+		absent = zone_absent_pages_in_node(nid, i,
						   node_start_pfn,
						   node_end_pfn);

@@ -8047,15 +8091,27 @@ unsigned long __init node_map_pfn_alignment(void)
 static unsigned long __init early_calculate_totalpages(void)
 {
 	unsigned long totalpages = 0;
-	unsigned long start_pfn, end_pfn;
-	int i, nid;
+	struct memblock_region *r;

-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
-		unsigned long pages = end_pfn - start_pfn;
+	for_each_mem_region(r) {
+		unsigned long start_pfn, end_pfn, pages;
+		int nid;
+
+		nid = memblock_get_region_node(r);
+		start_pfn = memblock_region_memory_base_pfn(r);
+		end_pfn = memblock_region_memory_end_pfn(r);

-		totalpages += pages;
-		if (pages)
+		pages = end_pfn - start_pfn;
+		if (pages) {
+			totalpages += pages;
 			node_set_state(nid, N_MEMORY);
+			if (memblock_is_movable(r)) {
+				if (start_pfn < min_dmb_pfn[nid])
+					min_dmb_pfn[nid] = start_pfn;
+				if (end_pfn > max_dmb_pfn[nid])
+					max_dmb_pfn[nid] = end_pfn;
+			}
+		}
 	}
 	return totalpages;
 }
@@ -8068,7 +8124,7 @@ static unsigned long __init early_calculate_totalpages(void)
  */
 static void __init find_zone_movable_pfns_for_nodes(void)
 {
-	int i, nid;
+	int nid;
 	unsigned long usable_startpfn;
 	unsigned long kernelcore_node, kernelcore_remaining;
 	/* save the state before borrow the nodemask */
@@ -8196,13 +8252,24 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		kernelcore_remaining = kernelcore_node;

 		/* Go through each range of PFNs within this node */
-		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+		for_each_mem_region(r) {
 			unsigned long size_pages;

+			if (memblock_get_region_node(r) != nid)
+				continue;
+
+			start_pfn = memblock_region_memory_base_pfn(r);
+			end_pfn = memblock_region_memory_end_pfn(r);
 			start_pfn = max(start_pfn, zone_movable_pfn[nid]);
 			if (start_pfn >= end_pfn)
 				continue;

+			/* Skip over Designated Movable Blocks */
+			if (memblock_is_movable(r)) {
+				zone_movable_pfn[nid] = end_pfn;
+				continue;
+			}
+
 			/* Account for what is only usable for kernelcore */
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
@@ -8351,6 +8418,8 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	}

 	/* Find the PFNs that ZONE_MOVABLE begins at in each node */
+	memset(min_dmb_pfn, 0xff, sizeof(min_dmb_pfn));
+	memset(max_dmb_pfn, 0, sizeof(max_dmb_pfn));
 	memset(zone_movable_pfn, 0, sizeof(zone_movable_pfn));
 	find_zone_movable_pfns_for_nodes();

-- 
2.34.1
From: Doug Berger
Subject: [PATCH v4 8/9] mm/page_alloc: make alloc_contig_pages DMB aware
Date: Fri, 10 Mar 2023 16:38:54 -0800
Message-Id: <20230311003855.645684-9-opendmb@gmail.com>

Designated Movable Blocks are skipped when attempting to allocate
contiguous pages. Doing per-page validation across all spanned pages
within a zone can be especially inefficient when Designated Movable
Blocks create large overlaps between zones. Use dmb_intersects()
within pfn_range_valid_contig() as an early check to signal that the
range is not valid.

The zone_movable_pfn array, which represents the start of
non-overlapped ZONE_MOVABLE on the node, is now preserved so it can
be used at runtime to skip over any DMB-only portion of the zone.
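A minimal sketch of the new search-start selection, assuming a power-of-two nr_pages so the mask-based ALIGN() idiom applies; the `demo_` names are hypothetical stand-ins, not the kernel helpers:

```c
/* Round x up to a multiple of a; a must be a power of two, which is
 * the same precondition the kernel's ALIGN() macro has. */
#define DEMO_ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Pick where alloc_contig_pages() should begin scanning: if a
 * recorded start for the non-overlapped part of ZONE_MOVABLE exists,
 * start there so DMB-only pfns are skipped; otherwise start at the
 * zone's first pfn. Either way, align up to the allocation size. */
static unsigned long demo_search_start(unsigned long zone_start_pfn,
				       unsigned long zone_movable_pfn,
				       unsigned long nr_pages)
{
	unsigned long base = zone_movable_pfn ? zone_movable_pfn
					      : zone_start_pfn;

	return DEMO_ALIGN(base, nr_pages);
}
```

When zone_movable_pfn is zero (no DMB on the node), the behavior degenerates to the old code path, which simply aligned the zone's start pfn.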
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 26846a9a9fc4..d4358d19d5a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -417,7 +417,7 @@ static unsigned long required_movablecore __initdata;
 static unsigned long required_movablecore_percent __initdata;
 static unsigned long min_dmb_pfn[MAX_NUMNODES] __initdata;
 static unsigned long max_dmb_pfn[MAX_NUMNODES] __initdata;
-static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata;
+static unsigned long zone_movable_pfn[MAX_NUMNODES];
 bool mirrored_kernelcore __initdata_memblock;

 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
@@ -9503,6 +9503,9 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 	unsigned long i, end_pfn = start_pfn + nr_pages;
 	struct page *page;

+	if (dmb_intersects(start_pfn, end_pfn))
+		return false;
+
 	for (i = start_pfn; i < end_pfn; i++) {
 		page = pfn_to_online_page(i);
 		if (!page)
@@ -9559,7 +9562,10 @@ struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 					gfp_zone(gfp_mask), nodemask) {
 		spin_lock_irqsave(&zone->lock, flags);

-		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
+		if (zone_idx(zone) == ZONE_MOVABLE && zone_movable_pfn[nid])
+			pfn = ALIGN(zone_movable_pfn[nid], nr_pages);
+		else
+			pfn = ALIGN(zone->zone_start_pfn, nr_pages);
 		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
 			if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
 				/*
-- 
2.34.1

From: Doug Berger
Subject: [PATCH v4 9/9] mm/page_alloc: allow base for movablecore
Date: Fri, 10 Mar 2023 16:38:55 -0800
Message-Id: <20230311003855.645684-10-opendmb@gmail.com>

A Designated Movable Block can be created by including the base
address of the block when specifying a movablecore range on the
kernel command line.
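The nn[KMG]@ss[KMG] syntax accepted here can be sketched in userspace with a stand-in for memparse(); demo_memparse and demo_parse_movablecore below are illustrative names, not the kernel helpers, and only the K/M/G suffixes are handled for brevity:

```c
#include <stdlib.h>

/* Stand-in for the kernel's memparse(): parse a number with an
 * optional K/M/G suffix, leaving *end just past the token. */
static unsigned long long demo_memparse(const char *s, char **end)
{
	unsigned long long v = strtoull(s, end, 0);

	switch (**end) {
	case 'G': v <<= 10; /* fall through */
	case 'M': v <<= 10; /* fall through */
	case 'K': v <<= 10; (*end)++; break;
	default: break;
	}
	return v;
}

/* Parse one movablecore token. With '@', both size and base are
 * filled in (the Designated Movable Block form); without it, only
 * size is set, matching the plain movablecore=nn form.
 * Returns 0 on success, -1 on malformed input. */
static int demo_parse_movablecore(const char *tok,
				  unsigned long long *size,
				  unsigned long long *base)
{
	char *p;

	*size = demo_memparse(tok, &p);
	if (*p == '@') {
		*base = demo_memparse(p + 1, &p);
		return *p == '\0' ? 0 : -1;
	}
	return *p == '\0' ? 0 : -1;
}
```

The kernel version additionally verifies the range against memblock (it must be memory and not already reserved) before reserving it and handing it to dmb_reserve(); a comma-separated list is split before each token is parsed.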
Signed-off-by: Doug Berger
---
 .../admin-guide/kernel-parameters.txt | 14 ++++++-
 mm/page_alloc.c                       | 38 ++++++++++++++++---
 2 files changed, 45 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6221a1d057dd..5e3bf6e0a264 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3353,7 +3353,7 @@
 			reporting absolute coordinates, such as tablets

 	movablecore=	[KNL,X86,IA-64,PPC]
-			Format: nn[KMGTPE] | nn%
+			Format: nn[KMGTPE] | nn[KMGTPE]@ss[KMGTPE] | nn%
 			This parameter is the complement to kernelcore=, it
 			specifies the amount of memory used for migratable
 			allocations.  If both kernelcore and movablecore is
@@ -3363,6 +3363,18 @@
 			that the amount of memory usable for all allocations
 			is not too small.

+			If @ss[KMGTPE] is included, memory within the region
+			from ss to ss+nn will be designated as a movable block
+			and included in ZONE_MOVABLE. Designated Movable Blocks
+			must be aligned to pageblock_order. Designated Movable
+			Blocks take priority over values of kernelcore= and are
+			considered part of any memory specified by more general
+			movablecore= values.
+			Multiple Designated Movable Blocks may be specified,
+			comma delimited.
+			Example:
+				movablecore=100M@2G,100M@3G,1G@1024G
+
 	movable_node	[KNL] Boot-time switch to make hotplugable memory
 			NUMA nodes to be movable. This means that the memory
 			of such nodes will be usable only for movable
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d4358d19d5a1..cb3c55acf7de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8504,9 +8504,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 }

 static int __init cmdline_parse_core(char *p, unsigned long *core,
-				     unsigned long *percent)
+				     unsigned long *percent, bool movable)
 {
-	unsigned long long coremem;
+	unsigned long long coremem, address;
 	char *endptr;

 	if (!p)
@@ -8521,6 +8521,17 @@ static int __init cmdline_parse_core(char *p, unsigned long *core,
 		*percent = coremem;
 	} else {
 		coremem = memparse(p, &p);
+		if (movable && *p == '@') {
+			address = memparse(++p, &p);
+			if (*p != '\0' ||
+			    !memblock_is_region_memory(address, coremem) ||
+			    memblock_is_region_reserved(address, coremem))
+				return -EINVAL;
+			memblock_reserve(address, coremem);
+			return dmb_reserve(address, coremem, NULL);
+		} else if (*p != '\0') {
+			return -EINVAL;
+		}
 		/* Paranoid check that UL is enough for the coremem value */
 		WARN_ON((coremem >> PAGE_SHIFT) > ULONG_MAX);

@@ -8543,17 +8554,32 @@ static int __init cmdline_parse_kernelcore(char *p)
 	}

 	return cmdline_parse_core(p, &required_kernelcore,
-				  &required_kernelcore_percent);
+				  &required_kernelcore_percent, false);
 }

 /*
  * movablecore=size sets the amount of memory for use for allocations that
- * can be reclaimed or migrated.
+ * can be reclaimed or migrated. movablecore=size@base defines a Designated
+ * Movable Block.
  */
 static int __init cmdline_parse_movablecore(char *p)
 {
-	return cmdline_parse_core(p, &required_movablecore,
-				  &required_movablecore_percent);
+	int ret = -EINVAL;
+
+	while (p) {
+		char *k = strchr(p, ',');
+
+		if (k)
+			*k++ = 0;
+
+		ret = cmdline_parse_core(p, &required_movablecore,
+					 &required_movablecore_percent, true);
+		if (ret)
+			break;
+		p = k;
+	}
+
+	return ret;
 }

 early_param("kernelcore", cmdline_parse_kernelcore);
-- 
2.34.1