From nobody Tue May 7 20:06:38 2024
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
 Neeraj Upadhyay, Randy Dunlap, Damien Le Moal, Muchun Song,
 KOSAKI Motohiro, Mel Gorman, Mike Kravetz, Florian Fainelli,
 David Hildenbrand, Oscar Salvador, Michal Hocko, Joonsoo Kim,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Doug Berger
Subject: [PATCH v2 1/9] lib/show_mem.c: display MovableOnly
Date: Wed, 28 Sep 2022 15:32:53 -0700
Message-Id: <20220928223301.375229-2-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>
References: <20220928223301.375229-1-opendmb@gmail.com>

The comment for commit c78e93630d15 ("mm: do not walk all of system
memory during show_mem") indicates it "also corrects the reporting of
HighMem as HighMem/MovableOnly as ZONE_MOVABLE has similar problems to
HighMem with respect to lowmem/highmem exhaustion."
Presuming the similar problems are with regard to the general exclusion
of kernel allocations from either zone, I believe it makes sense to
include all ZONE_MOVABLE memory even on systems without HighMem.

To the extent that this was the intent of the original commit I have
included a "Fixes" tag, but it seems unnecessary to submit to
linux-stable.

Fixes: c78e93630d15 ("mm: do not walk all of system memory during show_mem")
Signed-off-by: Doug Berger
---
 lib/show_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/show_mem.c b/lib/show_mem.c
index 1c26c14ffbb9..337c870a5e59 100644
--- a/lib/show_mem.c
+++ b/lib/show_mem.c
@@ -27,7 +27,7 @@ void show_mem(unsigned int filter, nodemask_t *nodemask)
 			total += zone->present_pages;
 			reserved += zone->present_pages - zone_managed_pages(zone);
 
-			if (is_highmem_idx(zoneid))
+			if (zoneid == ZONE_MOVABLE || is_highmem_idx(zoneid))
 				highmem += zone->present_pages;
 		}
 	}
-- 
2.25.1
From nobody Tue May 7 20:06:38 2024
From: Doug Berger
To: Andrew Morton
Subject: [PATCH v2 2/9] mm/vmstat: show start_pfn when zone spans pages
Date: Wed, 28 Sep 2022 15:32:54 -0700
Message-Id: <20220928223301.375229-3-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>

A zone that overlaps with another zone may span a range of pages that
are not present. In this case, displaying the start_pfn of the zone
allows the zone page range to be identified.

Signed-off-by: Doug Berger
---
 mm/vmstat.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 90af9a8572f5..e2f19f2b7615 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1717,6 +1717,11 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 
 	/* If unpopulated, no other information is useful */
 	if (!populated_zone(zone)) {
+		/* Show start_pfn for empty overlapped zones */
+		if (zone->spanned_pages)
+			seq_printf(m,
+				   "\n start_pfn: %lu",
+				   zone->zone_start_pfn);
 		seq_putc(m, '\n');
 		return;
 	}
-- 
2.25.1
From nobody Tue May 7 20:06:38 2024
From: Doug Berger
To: Andrew Morton
Subject: [PATCH v2 3/9] mm/page_alloc: calculate node_spanned_pages from pfns
Date: Wed, 28 Sep 2022 15:32:55 -0700
Message-Id: <20220928223301.375229-4-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>

Since the start and end pfns of the node are passed as arguments to
calculate_node_totalpages(), they might as well be used to specify the
node_spanned_pages value for the node rather than accumulating the
spans of member zones. This avoids the need for additional adjustments
if zones are allowed to overlap.
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e5486d47406e..3412d644c230 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7452,7 +7452,7 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 						unsigned long node_start_pfn,
 						unsigned long node_end_pfn)
 {
-	unsigned long realtotalpages = 0, totalpages = 0;
+	unsigned long realtotalpages = 0;
 	enum zone_type i;
 
 	for (i = 0; i < MAX_NR_ZONES; i++) {
@@ -7483,11 +7483,10 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 		zone->present_early_pages = real_size;
 #endif
 
-		totalpages += size;
 		realtotalpages += real_size;
 	}
 
-	pgdat->node_spanned_pages = totalpages;
+	pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
 	pgdat->node_present_pages = realtotalpages;
 	pr_debug("On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
 }
-- 
2.25.1
From nobody Tue May 7 20:06:38 2024
From: Doug Berger
To: Andrew Morton
Subject: [PATCH v2 4/9] mm/page_alloc.c: allow oversized movablecore
Date: Wed, 28 Sep 2022 15:32:56 -0700
Message-Id: <20220928223301.375229-5-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>

Now that the error in computation of corepages has been corrected by
commit 9fd745d450e7 ("mm: fix overflow in
find_zone_movable_pfns_for_nodes()"), oversized specifications of
movablecore will result in a zero value for required_kernelcore if it
is not also specified.

It is unintuitive for such a request to lead to no ZONE_MOVABLE memory
when the kernel parameters are clearly requesting some.

The current behavior when requesting an oversized kernelcore is to
classify all of the pages in movable_zone as kernelcore. The new
behavior when requesting an oversized movablecore (when not also
specifying kernelcore) is to similarly classify all of the pages in
movable_zone as movablecore.
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3412d644c230..81f97c5ed080 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8041,13 +8041,13 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		corepages = totalpages - required_movablecore;
 
 		required_kernelcore = max(required_kernelcore, corepages);
+	} else if (!required_kernelcore) {
+		/* If kernelcore was not specified, there is no ZONE_MOVABLE */
+		goto out;
 	}
 
-	/*
-	 * If kernelcore was not specified or kernelcore size is larger
-	 * than totalpages, there is no ZONE_MOVABLE.
-	 */
-	if (!required_kernelcore || required_kernelcore >= totalpages)
+	/* If kernelcore size exceeds totalpages, there is no ZONE_MOVABLE */
+	if (required_kernelcore >= totalpages)
 		goto out;
 
 	/* usable_startpfn is the lowest possible pfn ZONE_MOVABLE can be at */
-- 
2.25.1
From nobody Tue May 7 20:06:38 2024
From: Doug Berger
To: Andrew Morton
Subject: [PATCH v2 5/9] mm/page_alloc: introduce init_reserved_pageblock()
Date: Wed, 28 Sep 2022 15:32:57 -0700
Message-Id: <20220928223301.375229-6-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>

Most of the implementation of init_cma_reserved_pageblock() is common
to the initialization of any reserved pageblock for use by the page
allocator.

This commit breaks that functionality out into the new common function
init_reserved_pageblock() for use by code other than CMA. The CMA
specific code is relocated from page_alloc to the point where
init_cma_reserved_pageblock() was invoked and the new function is used
there instead. The error path is also updated to use the function to
operate on pageblocks rather than pages.
Signed-off-by: Doug Berger
---
 include/linux/gfp.h |  5 +----
 mm/cma.c            | 15 +++++++++++----
 mm/page_alloc.c     |  8 ++------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f314be58fa77..71ed687be406 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -367,9 +367,6 @@ extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 #endif
 void free_contig_range(unsigned long pfn, unsigned long nr_pages);
 
-#ifdef CONFIG_CMA
-/* CMA stuff */
-extern void init_cma_reserved_pageblock(struct page *page);
-#endif
+extern void init_reserved_pageblock(struct page *page);
 
 #endif /* __LINUX_GFP_H */
diff --git a/mm/cma.c b/mm/cma.c
index 4a978e09547a..6208a3e1cd9d 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "cma.h"
@@ -116,8 +117,13 @@ static void __init cma_activate_area(struct cma *cma)
 	}
 
 	for (pfn = base_pfn; pfn < base_pfn + cma->count;
-	     pfn += pageblock_nr_pages)
-		init_cma_reserved_pageblock(pfn_to_page(pfn));
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		set_pageblock_migratetype(page, MIGRATE_CMA);
+		init_reserved_pageblock(page);
+		page_zone(page)->cma_pages += pageblock_nr_pages;
+	}
 
 	spin_lock_init(&cma->lock);
 
@@ -133,8 +139,9 @@ static void __init cma_activate_area(struct cma *cma)
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
 	if (!cma->reserve_pages_on_error) {
-		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-			free_reserved_page(pfn_to_page(pfn));
+		for (pfn = base_pfn; pfn < base_pfn + cma->count;
+		     pfn += pageblock_nr_pages)
+			init_reserved_pageblock(pfn_to_page(pfn));
 	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 81f97c5ed080..6d4470b0daba 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2302,9 +2302,8 @@ void __init page_alloc_init_late(void)
 		set_zone_contiguous(zone);
 }
 
-#ifdef CONFIG_CMA
-/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
-void __init init_cma_reserved_pageblock(struct page *page)
+/* Free whole pageblock */
+void __init init_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
@@ -2314,14 +2313,11 @@ void __init init_cma_reserved_pageblock(struct page *page)
 		set_page_count(p, 0);
 	} while (++p, --i);
 
-	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
 	__free_pages(page, pageblock_order);
 
 	adjust_managed_page_count(page, pageblock_nr_pages);
-	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
-#endif
 
 /*
  * The order of subdivision here is critical for the IO subsystem.
-- 
2.25.1

From nobody Tue May 7 20:06:38 2024
From: Doug Berger
To: Andrew Morton
Subject: [PATCH v2 6/9] memblock: introduce MEMBLOCK_MOVABLE flag
Date: Wed, 28 Sep 2022 15:32:58 -0700
Message-Id: <20220928223301.375229-7-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>

The MEMBLOCK_MOVABLE flag is introduced to designate a memblock as only
supporting movable allocations by the page allocator.
Signed-off-by: Doug Berger --- include/linux/memblock.h | 8 ++++++++ mm/memblock.c | 24 ++++++++++++++++++++++++ 2 files changed, 32 insertions(+) diff --git a/include/linux/memblock.h b/include/linux/memblock.h index 50ad19662a32..8eb3ca32dfa7 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -47,6 +47,7 @@ enum memblock_flags { MEMBLOCK_MIRROR =3D 0x2, /* mirrored region */ MEMBLOCK_NOMAP =3D 0x4, /* don't add to kernel direct mapping */ MEMBLOCK_DRIVER_MANAGED =3D 0x8, /* always detected via a driver */ + MEMBLOCK_MOVABLE =3D 0x10, /* designated movable block */ }; =20 /** @@ -125,6 +126,8 @@ int memblock_clear_hotplug(phys_addr_t base, phys_addr_= t size); int memblock_mark_mirror(phys_addr_t base, phys_addr_t size); int memblock_mark_nomap(phys_addr_t base, phys_addr_t size); int memblock_clear_nomap(phys_addr_t base, phys_addr_t size); +int memblock_mark_movable(phys_addr_t base, phys_addr_t size); +int memblock_clear_movable(phys_addr_t base, phys_addr_t size); =20 void memblock_free_all(void); void memblock_free(void *ptr, size_t size); @@ -265,6 +268,11 @@ static inline bool memblock_is_driver_managed(struct m= emblock_region *m) return m->flags & MEMBLOCK_DRIVER_MANAGED; } =20 +static inline bool memblock_is_movable(struct memblock_region *m) +{ + return m->flags & MEMBLOCK_MOVABLE; +} + int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn, unsigned long *end_pfn); void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn, diff --git a/mm/memblock.c b/mm/memblock.c index b5d3026979fc..5d6a210d98ec 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -979,6 +979,30 @@ int __init_memblock memblock_clear_nomap(phys_addr_t b= ase, phys_addr_t size) return memblock_setclr_flag(base, size, 0, MEMBLOCK_NOMAP); } =20 +/** + * memblock_mark_movable - Mark designated movable block with MEMBLOCK_MOV= ABLE. 
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_mark_movable(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 1, MEMBLOCK_MOVABLE);
+}
+
+/**
+ * memblock_clear_movable - Clear flag MEMBLOCK_MOVABLE for a specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_movable(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 0, MEMBLOCK_MOVABLE);
+}
+
 static bool should_skip_region(struct memblock_type *type,
 			       struct memblock_region *m,
 			       int nid, int flags)
-- 
2.25.1
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
 Neeraj Upadhyay, Randy Dunlap, Damien Le Moal, Muchun Song,
 KOSAKI Motohiro, Mel Gorman, Mike Kravetz, Florian Fainelli,
 David Hildenbrand, Oscar Salvador, Michal Hocko, Joonsoo Kim,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Doug Berger
Subject: [PATCH v2 7/9] mm/dmb: Introduce Designated Movable Blocks
Date: Wed, 28 Sep 2022 15:32:59 -0700
Message-Id: <20220928223301.375229-8-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>
References: <20220928223301.375229-1-opendmb@gmail.com>

Designated Movable Blocks are blocks of memory that are composed of
one or more adjacent memblocks that have the MEMBLOCK_MOVABLE
designation. These blocks must be reserved before receiving that
designation and will be located in the ZONE_MOVABLE zone rather than
any other zone that may span them.

Signed-off-by: Doug Berger
---
 include/linux/dmb.h | 29 +++++++++++++++
 mm/Kconfig          | 12 ++++++
 mm/Makefile         |  1 +
 mm/dmb.c            | 91 +++++++++++++++++++++++++++++++++++++++++++++
 mm/memblock.c       |  6 ++-
 mm/page_alloc.c     | 84 ++++++++++++++++++++++++++++++++++-------
 6 files changed, 209 insertions(+), 14 deletions(-)
 create mode 100644 include/linux/dmb.h
 create mode 100644 mm/dmb.c

diff --git a/include/linux/dmb.h b/include/linux/dmb.h
new file mode 100644
index 000000000000..fa2976c0fa21
--- /dev/null
+++ b/include/linux/dmb.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __DMB_H__
+#define __DMB_H__
+
+#include <linux/memblock.h>
+
+/*
+ * the buddy -- especially pageblock merging and alloc_contig_range()
+ * -- can deal with only some pageblocks of a higher-order page being
+ * MIGRATE_MOVABLE, we can use pageblock_nr_pages.
+ */
+#define DMB_MIN_ALIGNMENT_PAGES pageblock_nr_pages
+#define DMB_MIN_ALIGNMENT_BYTES (PAGE_SIZE * DMB_MIN_ALIGNMENT_PAGES)
+
+enum {
+	DMB_DISJOINT = 0,
+	DMB_INTERSECTS,
+	DMB_MIXED,
+};
+
+struct dmb;
+
+extern int dmb_intersects(unsigned long spfn, unsigned long epfn);
+
+extern int dmb_reserve(phys_addr_t base, phys_addr_t size,
+		       struct dmb **res_dmb);
+extern void dmb_init_region(struct memblock_region *region);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index 0331f1461f81..7739edde5d4d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -868,6 +868,18 @@ config CMA_AREAS
 
 	  If unsure, leave the default value "7" in UMA and "19" in NUMA.
 
+config DMB_COUNT
+	int "Maximum count of Designated Movable Blocks"
+	default 19 if NUMA
+	default 7
+	help
+	  Designated Movable Blocks are blocks of memory that can be used
+	  by the page allocator exclusively for movable pages. They are
+	  managed in ZONE_MOVABLE but may overlap with other zones. This
+	  parameter sets the maximum number of DMBs in the system.
+
+	  If unsure, leave the default value "7" in UMA and "19" in NUMA.
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
diff --git a/mm/Makefile b/mm/Makefile
index 9a564f836403..d0b469a494f2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -67,6 +67,7 @@ obj-y += page-alloc.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
+obj-y += dmb.o
 
 ifdef CONFIG_MMU
 	obj-$(CONFIG_ADVISE_SYSCALLS)	+= madvise.o
diff --git a/mm/dmb.c b/mm/dmb.c
new file mode 100644
index 000000000000..f6c4e2662e0f
--- /dev/null
+++ b/mm/dmb.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Designated Movable Block
+ */
+
+#define pr_fmt(fmt) "dmb: " fmt
+
+#include <linux/dmb.h>
+
+struct dmb {
+	unsigned long start_pfn;
+	unsigned long end_pfn;
+};
+
+static struct dmb dmb_areas[CONFIG_DMB_COUNT];
+static unsigned int dmb_area_count;
+
+int dmb_intersects(unsigned long spfn, unsigned long epfn)
+{
+	int i;
+	struct dmb *dmb;
+
+	if (spfn >= epfn)
+		return DMB_DISJOINT;
+
+	for (i = 0; i < dmb_area_count; i++) {
+		dmb = &dmb_areas[i];
+		if (spfn >= dmb->end_pfn)
+			continue;
+		if (epfn <= dmb->start_pfn)
+			return DMB_DISJOINT;
+		if (spfn >= dmb->start_pfn && epfn <= dmb->end_pfn)
+			return DMB_INTERSECTS;
+		else
+			return DMB_MIXED;
+	}
+
+	return DMB_DISJOINT;
+}
+EXPORT_SYMBOL(dmb_intersects);
+
+int __init dmb_reserve(phys_addr_t base, phys_addr_t size,
+		       struct dmb **res_dmb)
+{
+	struct dmb *dmb;
+
+	/* Sanity checks */
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	if (!IS_ALIGNED(base | size, DMB_MIN_ALIGNMENT_BYTES))
+		return -EINVAL;
+
+	if (dmb_area_count == ARRAY_SIZE(dmb_areas)) {
+		pr_warn("Not enough slots for DMB reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	dmb = &dmb_areas[dmb_area_count++];
+
+	dmb->start_pfn = PFN_DOWN(base);
+	dmb->end_pfn = PFN_DOWN(base + size);
+	if (res_dmb)
+		*res_dmb = dmb;
+
+	memblock_mark_movable(base, size);
+	return 0;
+}
+
+void __init dmb_init_region(struct memblock_region *region)
+{
+	unsigned long pfn;
+	int i;
+
+	for (pfn = memblock_region_memory_base_pfn(region);
+	     pfn < memblock_region_memory_end_pfn(region);
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		for (i = 0; i < pageblock_nr_pages; i++)
+			set_page_zone(page + i, ZONE_MOVABLE);
+
+		/* free reserved pageblocks to page allocator */
+		init_reserved_pageblock(page);
+	}
+}
diff --git a/mm/memblock.c b/mm/memblock.c
index 5d6a210d98ec..9eb91acdeb75 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -16,6 +16,7 @@
 #include <linux/kmemleak.h>
 #include <linux/seq_file.h>
 #include <linux/memblock.h>
+#include <linux/dmb.h>
 
 #include <asm/sections.h>
 #include <linux/io.h>
@@ -2090,13 +2091,16 @@ static void __init memmap_init_reserved_pages(void)
 	for_each_reserved_mem_range(i, &start, &end)
 		reserve_bootmem_region(start, end);
 
-	/* and also treat struct pages for the NOMAP regions as PageReserved */
 	for_each_mem_region(region) {
+		/* treat struct pages for the NOMAP regions as PageReserved */
 		if (memblock_is_nomap(region)) {
 			start = region->base;
 			end = start + region->size;
 			reserve_bootmem_region(start, end);
 		}
+		/* move Designated Movable Block pages to ZONE_MOVABLE */
+		if (memblock_is_movable(region))
+			dmb_init_region(region);
 	}
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6d4470b0daba..cd31f26b0d21 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -75,6 +75,7 @@
 #include
 #include
 #include
+#include <linux/dmb.h>
 #include
 #include
 #include
@@ -433,6 +434,7 @@ static unsigned long required_kernelcore __initdata;
 static unsigned long required_kernelcore_percent __initdata;
 static unsigned long required_movablecore __initdata;
 static unsigned long required_movablecore_percent __initdata;
+static unsigned long min_dmb_pfn[MAX_NUMNODES] __initdata;
 static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata;
 bool mirrored_kernelcore __initdata_memblock;
 
@@ -2165,7 +2167,7 @@ static int __init deferred_init_memmap(void *data)
 	}
 zone_empty:
 	/* Sanity check that the next zone really is unpopulated */
-	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
+	WARN_ON(++zid < ZONE_MOVABLE && populated_zone(++zone));
 
 	pr_info("node %d deferred pages initialised in %ums\n",
 		pgdat->node_id, jiffies_to_msecs(jiffies - start));
@@ -6899,6 +6901,10 @@ static void __init memmap_init_zone_range(struct zone *zone,
 	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
 	int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
 
+	/* Skip overlap of ZONE_MOVABLE */
+	if (zone_id == ZONE_MOVABLE && zone_start_pfn < *hole_pfn)
+		zone_start_pfn = *hole_pfn;
+
 	start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
 	end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
 
@@ -7348,6 +7354,9 @@ static unsigned long __init zone_spanned_pages_in_node(int nid,
 					  node_start_pfn, node_end_pfn,
 					  zone_start_pfn, zone_end_pfn);
 
+	if (zone_type == ZONE_MOVABLE && min_dmb_pfn[nid])
+		*zone_start_pfn = min(*zone_start_pfn, min_dmb_pfn[nid]);
+
 	/* Check that this node has pages within the zone's required range */
 	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
 		return 0;
@@ -7416,12 +7425,17 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 					   &zone_start_pfn, &zone_end_pfn);
 	nr_absent = __absent_pages_in_range(nid, zone_start_pfn, zone_end_pfn);
 
+	if (zone_type == ZONE_MOVABLE && min_dmb_pfn[nid]) {
+		zone_start_pfn = min(zone_start_pfn, min_dmb_pfn[nid]);
+		nr_absent += zone_movable_pfn[nid] - zone_start_pfn;
+	}
+
 	/*
 	 * ZONE_MOVABLE handling.
-	 * Treat pages to be ZONE_MOVABLE in ZONE_NORMAL as absent pages
+	 * Treat pages to be ZONE_MOVABLE in other zones as absent pages
 	 * and vice versa.
 	 */
-	if (mirrored_kernelcore && zone_movable_pfn[nid]) {
+	if (zone_movable_pfn[nid]) {
 		unsigned long start_pfn, end_pfn;
 		struct memblock_region *r;
 
@@ -7431,6 +7445,21 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 			end_pfn = clamp(memblock_region_memory_end_pfn(r),
 					zone_start_pfn, zone_end_pfn);
 
+			if (memblock_is_movable(r)) {
+				if (zone_type != ZONE_MOVABLE) {
+					nr_absent += end_pfn - start_pfn;
+					continue;
+				}
+
+				end_pfn = min(end_pfn, zone_movable_pfn[nid]);
+				if (start_pfn < zone_movable_pfn[nid])
+					nr_absent -= end_pfn - start_pfn;
+				continue;
+			}
+
+			if (!mirrored_kernelcore)
+				continue;
+
 			if (zone_type == ZONE_MOVABLE &&
 			    memblock_is_mirror(r))
 				nr_absent += end_pfn - start_pfn;
@@ -7450,6 +7479,15 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 {
 	unsigned long realtotalpages = 0;
 	enum zone_type i;
+	int nid = pgdat->node_id;
+
+	/*
+	 * If Designated Movable Blocks are defined on this node, ensure that
+	 * zone_movable_pfn is also defined for this node.
+	 */
+	if (min_dmb_pfn[nid] && !zone_movable_pfn[nid])
+		zone_movable_pfn[nid] = min(node_end_pfn,
+			arch_zone_highest_possible_pfn[movable_zone]);
 
 	for (i = 0; i < MAX_NR_ZONES; i++) {
 		struct zone *zone = pgdat->node_zones + i;
@@ -7457,12 +7495,12 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 		unsigned long spanned, absent;
 		unsigned long size, real_size;
 
-		spanned = zone_spanned_pages_in_node(pgdat->node_id, i,
+		spanned = zone_spanned_pages_in_node(nid, i,
 						     node_start_pfn,
 						     node_end_pfn,
 						     &zone_start_pfn,
 						     &zone_end_pfn);
-		absent = zone_absent_pages_in_node(pgdat->node_id, i,
+		absent = zone_absent_pages_in_node(nid, i,
 						   node_start_pfn,
 						   node_end_pfn);
 
@@ -7922,15 +7960,23 @@ unsigned long __init find_min_pfn_with_active_regions(void)
 static unsigned long __init early_calculate_totalpages(void)
 {
 	unsigned long totalpages = 0;
-	unsigned long start_pfn, end_pfn;
-	int i, nid;
+	struct memblock_region *r;
 
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
-		unsigned long pages = end_pfn - start_pfn;
+	for_each_mem_region(r) {
+		unsigned long start_pfn, end_pfn, pages;
+		int nid;
 
-		totalpages += pages;
-		if (pages)
+		nid = memblock_get_region_node(r);
+		start_pfn = memblock_region_memory_base_pfn(r);
+		end_pfn = memblock_region_memory_end_pfn(r);
+
+		pages = end_pfn - start_pfn;
+		if (pages) {
+			totalpages += pages;
 			node_set_state(nid, N_MEMORY);
+			if (memblock_is_movable(r) && !min_dmb_pfn[nid])
+				min_dmb_pfn[nid] = start_pfn;
+		}
 	}
 	return totalpages;
 }
@@ -7943,7 +7989,7 @@ static unsigned long __init early_calculate_totalpages(void)
  */
 static void __init find_zone_movable_pfns_for_nodes(void)
 {
-	int i, nid;
+	int nid;
 	unsigned long usable_startpfn;
 	unsigned long kernelcore_node, kernelcore_remaining;
 	/* save the state before borrow the nodemask */
@@ -8071,13 +8117,24 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		kernelcore_remaining = kernelcore_node;
 
 		/* Go through each range of PFNs within this node */
-		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+		for_each_mem_region(r) {
 			unsigned long size_pages;
 
+			if (memblock_get_region_node(r) != nid)
+				continue;
+
+			start_pfn = memblock_region_memory_base_pfn(r);
+			end_pfn = memblock_region_memory_end_pfn(r);
 			start_pfn = max(start_pfn, zone_movable_pfn[nid]);
 			if (start_pfn >= end_pfn)
 				continue;
 
+			/* Skip over Designated Movable Blocks */
+			if (memblock_is_movable(r)) {
+				zone_movable_pfn[nid] = end_pfn;
+				continue;
+			}
+
 			/* Account for what is only usable for kernelcore */
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
@@ -8226,6 +8283,7 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	}
 
 	/* Find the PFNs that ZONE_MOVABLE begins at in each node */
+	memset(min_dmb_pfn, 0, sizeof(min_dmb_pfn));
 	memset(zone_movable_pfn, 0, sizeof(zone_movable_pfn));
 	find_zone_movable_pfns_for_nodes();
 
-- 
2.25.1
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
 Neeraj Upadhyay, Randy Dunlap, Damien Le Moal, Muchun Song,
 KOSAKI Motohiro, Mel Gorman, Mike Kravetz, Florian Fainelli,
 David Hildenbrand, Oscar Salvador, Michal Hocko, Joonsoo Kim,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Doug Berger
Subject: [PATCH v2 8/9] mm/page_alloc: make alloc_contig_pages DMB aware
Date: Wed, 28 Sep 2022 15:33:00 -0700
Message-Id: <20220928223301.375229-9-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>
References: <20220928223301.375229-1-opendmb@gmail.com>

Designated Movable Blocks are skipped when attempting to allocate
contiguous pages. Doing per-page validation across all spanned pages
within a zone can be especially inefficient when Designated Movable
Blocks create large overlaps between zones. Use dmb_intersects()
within pfn_range_valid_contig() as an early check to signal that the
range is not valid.

The zone_movable_pfn array, which represents the start of the
non-overlapped portion of ZONE_MOVABLE on each node, is now preserved
for use at runtime to skip over any DMB-only portion of the zone.
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cd31f26b0d21..c07111a897c0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -435,7 +435,7 @@ static unsigned long required_kernelcore_percent __initdata;
 static unsigned long required_movablecore __initdata;
 static unsigned long required_movablecore_percent __initdata;
 static unsigned long min_dmb_pfn[MAX_NUMNODES] __initdata;
-static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata;
+static unsigned long zone_movable_pfn[MAX_NUMNODES];
 bool mirrored_kernelcore __initdata_memblock;
 
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
@@ -9369,6 +9369,9 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 	unsigned long i, end_pfn = start_pfn + nr_pages;
 	struct page *page;
 
+	if (dmb_intersects(start_pfn, end_pfn))
+		return false;
+
 	for (i = start_pfn; i < end_pfn; i++) {
 		page = pfn_to_online_page(i);
 		if (!page)
@@ -9425,7 +9428,10 @@ struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 					gfp_zone(gfp_mask), nodemask) {
 		spin_lock_irqsave(&zone->lock, flags);
 
-		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
+		if (zone_idx(zone) == ZONE_MOVABLE && zone_movable_pfn[nid])
+			pfn = ALIGN(zone_movable_pfn[nid], nr_pages);
+		else
+			pfn = ALIGN(zone->zone_start_pfn, nr_pages);
 		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
 			if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
 				/*
-- 
2.25.1
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Mike Rapoport, Borislav Petkov, "Paul E. McKenney",
 Neeraj Upadhyay, Randy Dunlap, Damien Le Moal, Muchun Song,
 KOSAKI Motohiro, Mel Gorman, Mike Kravetz, Florian Fainelli,
 David Hildenbrand, Oscar Salvador, Michal Hocko, Joonsoo Kim,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Doug Berger
Subject: [PATCH v2 9/9] mm/page_alloc: allow base for movablecore
Date: Wed, 28 Sep 2022 15:33:01 -0700
Message-Id: <20220928223301.375229-10-opendmb@gmail.com>
In-Reply-To: <20220928223301.375229-1-opendmb@gmail.com>
References: <20220928223301.375229-1-opendmb@gmail.com>

A Designated Movable Block can be created by including the base
address of the block when specifying a movablecore range on the
kernel command line.
Signed-off-by: Doug Berger
---
 .../admin-guide/kernel-parameters.txt | 14 ++++++-
 mm/page_alloc.c                       | 38 ++++++++++++++++---
 2 files changed, 45 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 426fa892d311..8141fac7c7cb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3312,7 +3312,7 @@
 			reporting absolute coordinates, such as tablets
 
 	movablecore=	[KNL,X86,IA-64,PPC]
-			Format: nn[KMGTPE] | nn%
+			Format: nn[KMGTPE] | nn[KMGTPE]@ss[KMGTPE] | nn%
 			This parameter is the complement to kernelcore=, it
 			specifies the amount of memory used for migratable
 			allocations. If both kernelcore and movablecore is
@@ -3322,6 +3322,18 @@
 			that the amount of memory usable for all allocations
 			is not too small.
 
+			If @ss[KMGTPE] is included, memory within the region
+			from ss to ss+nn will be designated as a movable block
+			and included in ZONE_MOVABLE. Designated Movable Blocks
+			must be aligned to pageblock_order. Designated Movable
+			Blocks take priority over values of kernelcore= and are
+			considered part of any memory specified by more general
+			movablecore= values.
+			Multiple Designated Movable Blocks may be specified,
+			comma delimited.
+			Example:
+				movablecore=100M@2G,100M@3G,1G@1024G
+
 	movable_node	[KNL] Boot-time switch to make hotplugable memory
 			NUMA nodes to be movable. This means that the memory
 			of such nodes will be usable only for movable
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c07111a897c0..a151752c4266 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8370,9 +8370,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 }
 
 static int __init cmdline_parse_core(char *p, unsigned long *core,
-				     unsigned long *percent)
+				     unsigned long *percent, bool movable)
 {
-	unsigned long long coremem;
+	unsigned long long coremem, address;
 	char *endptr;
 
 	if (!p)
@@ -8387,6 +8387,17 @@ static int __init cmdline_parse_core(char *p, unsigned long *core,
 		*percent = coremem;
 	} else {
 		coremem = memparse(p, &p);
+		if (movable && *p == '@') {
+			address = memparse(++p, &p);
+			if (*p != '\0' ||
+			    !memblock_is_region_memory(address, coremem) ||
+			    memblock_is_region_reserved(address, coremem))
+				return -EINVAL;
+			memblock_reserve(address, coremem);
+			return dmb_reserve(address, coremem, NULL);
+		} else if (*p != '\0') {
+			return -EINVAL;
+		}
 		/* Paranoid check that UL is enough for the coremem value */
 		WARN_ON((coremem >> PAGE_SHIFT) > ULONG_MAX);
 
@@ -8409,17 +8420,32 @@ static int __init cmdline_parse_kernelcore(char *p)
 	}
 
 	return cmdline_parse_core(p, &required_kernelcore,
-				  &required_kernelcore_percent);
+				  &required_kernelcore_percent, false);
 }
 
 /*
  * movablecore=size sets the amount of memory for use for allocations that
- * can be reclaimed or migrated.
+ * can be reclaimed or migrated. movablecore=size@base defines a Designated
+ * Movable Block.
  */
 static int __init cmdline_parse_movablecore(char *p)
 {
-	return cmdline_parse_core(p, &required_movablecore,
-				  &required_movablecore_percent);
+	int ret = -EINVAL;
+
+	while (p) {
+		char *k = strchr(p, ',');
+
+		if (k)
+			*k++ = 0;
+
+		ret = cmdline_parse_core(p, &required_movablecore,
+					 &required_movablecore_percent, true);
+		if (ret)
+			break;
+		p = k;
+	}
+
+	return ret;
 }
 
 early_param("kernelcore", cmdline_parse_kernelcore);
-- 
2.25.1
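As a closing aside for reviewers of this series, the interval classification that `dmb_intersects()` performs in patch 7/9 can be exercised in isolation. The sketch below is a userspace model of that walk (the name `dmb_intersects_model` and the explicit `areas`/`n` parameters are illustrative, not part of the patches), and it assumes the block list is sorted by address and non-overlapping, which the walk's early `DMB_DISJOINT` return appears to rely on:

```c
#include <assert.h>

enum { DMB_DISJOINT = 0, DMB_INTERSECTS, DMB_MIXED };

struct dmb {
	unsigned long start_pfn;
	unsigned long end_pfn;
};

/*
 * Classify the pfn range [spfn, epfn) against a sorted, non-overlapping
 * list of blocks, mirroring the loop structure of dmb_intersects().
 */
static int dmb_intersects_model(const struct dmb *areas, int n,
				unsigned long spfn, unsigned long epfn)
{
	int i;

	if (spfn >= epfn)
		return DMB_DISJOINT;	/* empty range */

	for (i = 0; i < n; i++) {
		if (spfn >= areas[i].end_pfn)
			continue;	/* range starts above this block */
		if (epfn <= areas[i].start_pfn)
			return DMB_DISJOINT;	/* falls in the gap below it */
		if (spfn >= areas[i].start_pfn && epfn <= areas[i].end_pfn)
			return DMB_INTERSECTS;	/* fully inside one block */
		return DMB_MIXED;	/* partial overlap */
	}

	return DMB_DISJOINT;	/* above all blocks */
}
```

The three-way result is what lets `pfn_range_valid_contig()` reject a candidate range with a single call instead of validating every page: any result other than `DMB_DISJOINT` means part of the range sits inside a Designated Movable Block.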