From nobody Sun May 5 21:29:48 2024
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org, Andrew Morton, David Hildenbrand, Alexander Duyck, Vlastimil Babka, Oscar Salvador, Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport
Subject: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
Date: Mon, 28 Sep 2020 20:21:06 +0200
Message-Id: <20200928182110.7050-2-david@redhat.com>
In-Reply-To: <20200928182110.7050-1-david@redhat.com>
References: <20200928182110.7050-1-david@redhat.com>

Let's prepare for additional flags and avoid long parameter lists of
bools. Follow-up patches will also make use of the flags in
__free_pages_ok(). However, I wasn't able to come up with a better name
for the type - it should be good enough for internal purposes.

Reviewed-by: Alexander Duyck
Reviewed-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Reviewed-by: Pankaj Gupta
Reviewed-by: Wei Yang
---
 mm/page_alloc.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df90e3654f97..daab90e960fe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -77,6 +77,18 @@
 #include "shuffle.h"
 #include "page_reporting.h"
 
+/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fop_t;
+
+/* No special request */
+#define FOP_NONE		((__force fop_t)0)
+
+/*
+ * Skip free page reporting notification for the (possibly merged) page (will
+ * *not* mark the page reported, only skip the notification).
+ */
+#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
  * -- nyc
  */
 
-static inline void __free_one_page(struct page *page,
-		unsigned long pfn,
-		struct zone *zone, unsigned int order,
-		int migratetype, bool report)
+static inline void __free_one_page(struct page *page, unsigned long pfn,
+				   struct zone *zone, unsigned int order,
+				   int migratetype, fop_t fop_flags)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn;
@@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
 	add_to_free_list(page, zone, order, migratetype);
 
 	/* Notify page reporting subsystem of freed page */
-	if (report)
+	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
 		page_reporting_notify_free(order);
 }
 
@@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		if (unlikely(isolated_pageblocks))
 			mt = get_pageblock_migratetype(page);
 
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
 		trace_mm_page_pcpu_drain(page, 0, mt);
 	}
 	spin_unlock(&zone->lock);
@@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
 	    is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, true);
+	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
 	spin_unlock(&zone->lock);
 }
 
@@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 	lockdep_assert_held(&zone->lock);
 
 	/* Return isolated page to tail of freelist. */
-	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+	__free_one_page(page, page_to_pfn(page), zone, order, mt,
+			FOP_SKIP_REPORT_NOTIFY);
 }
 
 /*
--
2.26.2
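
The bool-to-flag conversion above follows a common kernel pattern: a
sparse-checked integer type plus #define'd bits. Below is a minimal
userspace sketch of that pattern, not the kernel code itself - the
__bitwise/__force annotations are stubbed out because they only mean
something to the sparse checker, and the printf bodies are invented:

	#include <stdio.h>

	#define __bitwise	/* checked by sparse in the kernel; no-op here */
	#define __force		/* likewise */
	#define BIT(n)		(1UL << (n))

	typedef int __bitwise fop_t;

	#define FOP_NONE		((__force fop_t)0)
	#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))

	/* One flag word instead of a growing list of bool parameters. */
	static void free_one_page(fop_t fop_flags)
	{
		if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
			printf("notifying free page reporting\n");
		else
			printf("skipping the notification\n");
	}

	int main(void)
	{
		free_one_page(FOP_NONE);
		free_one_page(FOP_SKIP_REPORT_NOTIFY);
		return 0;
	}

With sparse enabled, passing a plain int where a fop_t is expected would
warn; that type safety is what the __bitwise/__force dance buys over a
bare int of flags.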
From nobody Sun May 5 21:29:48 2024
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org, Andrew Morton, David Hildenbrand, Alexander Duyck, Oscar Salvador, Mel Gorman, Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang, Mike Rapoport, Scott Cheloha, Michael Ellerman
Subject: [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
Date: Mon, 28 Sep 2020 20:21:07 +0200
Message-Id: <20200928182110.7050-3-david@redhat.com>
In-Reply-To: <20200928182110.7050-1-david@redhat.com>
References: <20200928182110.7050-1-david@redhat.com>

__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()), which should be the
case for all existing users.

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).

This behavior is desirable for pages that haven't really been touched
lately - exactly the situation for these two users, which don't actually
read/write page content but rather move untouched pages around.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range() in
online_pages(). Right now, we always place them to the head of the
freelist, resulting in undesirable behavior: assume we add individual
memory chunks via add_memory() and online them right away to the NORMAL
zone. We create a dependency chain of unmovable allocations, e.g., via the
memmap. The memmap of the next chunk will be placed onto previous chunks -
if the last block cannot get offlined+removed, all dependent ones cannot
get offlined+removed. While this can already be observed with individual
DIMMs, it's more of an issue for virtio-mem (and I suspect also ppc
DLPAR).

Document that this should only be used for optimizations, and that no code
should rely on this behavior for correctness (in case the ordering of the
freelists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining. Free page reporting doesn't care about physically
contiguous ranges, and there are already cases where page isolation will
simply move (physically close) free pages to (currently) the head of the
freelists via move_freepages_block() instead of shuffling. If this ever
becomes relevant, we should shuffle the whole zone when undoing isolation
of larger ranges, and after free_contig_range().
Reviewed-by: Alexander Duyck
Reviewed-by: Oscar Salvador
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Reviewed-by: Pankaj Gupta
Reviewed-by: Wei Yang
---
 mm/page_alloc.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index daab90e960fe..9e3ed4a6f69a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
  */
 #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ *	 to allow for optimizations when handing back either fresh pages
+ *	 (memory onlining) or untouched pages (page isolation, free page
+ *	 reporting).
+ */
+#define FOP_TO_TAIL		((__force fop_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
 done_merging:
 	set_page_order(page, order);
 
-	if (is_shuffle_order(order))
+	if (fop_flags & FOP_TO_TAIL)
+		to_tail = true;
+	else if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FOP_SKIP_REPORT_NOTIFY);
+			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
 }
 
 /*
--
2.26.2
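
The core of the patch above is the new three-way placement decision in
__free_one_page(): an explicit FOP_TO_TAIL request wins, then shuffling,
then the merge heuristic. A stand-alone sketch of that ladder - the helper
names mirror mm/page_alloc.c and mm/shuffle.c, but the bodies here are
deliberately trivial stubs:

	#include <stdbool.h>
	#include <stdio.h>

	typedef int fop_t;
	#define FOP_TO_TAIL	(1 << 1)

	/* Stubs; the real predicates live in mm/page_alloc.c and mm/shuffle.c. */
	static bool is_shuffle_order(unsigned int order) { return false; }
	static bool shuffle_pick_tail(void) { return false; }
	static bool buddy_merge_likely(void) { return false; }

	static bool pick_to_tail(unsigned int order, fop_t fop_flags)
	{
		if (fop_flags & FOP_TO_TAIL)	/* explicit request wins */
			return true;
		if (is_shuffle_order(order))	/* freelist shuffling enabled? */
			return shuffle_pick_tail();
		return buddy_merge_likely();	/* heuristic for ordinary frees */
	}

	int main(void)
	{
		printf("order-9, FOP_TO_TAIL: tail=%d\n", pick_to_tail(9, FOP_TO_TAIL));
		printf("order-0, no flags:    tail=%d\n", pick_to_tail(0, 0));
		return 0;
	}

The ordering matters: shuffling normally randomizes head/tail placement,
so the flag has to short-circuit it, which is why the comment on
FOP_TO_TAIL says callers are expected to shuffle the whole zone instead.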
From nobody Sun May 5 21:29:48 2024
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org, Andrew Morton, David Hildenbrand, Oscar Salvador, Alexander Duyck, Mel Gorman, Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang, Mike Rapoport, Scott Cheloha, Michael Ellerman
Subject: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
Date: Mon, 28 Sep 2020 20:21:08 +0200
Message-Id: <20200928182110.7050-4-david@redhat.com>
In-Reply-To: <20200928182110.7050-1-david@redhat.com>
References: <20200928182110.7050-1-david@redhat.com>

Page isolation doesn't actually touch the pages, it simply isolates
pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.

We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(); let's do it in any case
(e.g., if order <= pageblock_order) and document the behavior.

Add a "to_tail" parameter to move_freepages_block() and introduce a new
move_to_free_list_tail() - similar to add_to_free_list_tail().

This change results in all pages onlined via online_pages() being placed
at the tail of the freelist.

Reviewed-by: Oscar Salvador
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Scott Cheloha
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Acked-by: Pankaj Gupta
Reviewed-by: Wei Yang
---
 include/linux/page-isolation.h |  4 ++--
 mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
 mm/page_isolation.c            | 12 +++++++++---
 3 files changed, 35 insertions(+), 16 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..3eca9b3c5305 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
 struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 				 int migratetype, int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable);
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
+			 bool to_tail, int *num_movable);
 
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9e3ed4a6f69a..d5a5f528b8ca 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 	list_move(&page->lru, &area->free_list[migratetype]);
 }
 
+/* Used for pages which are on another list */
+static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
+					  unsigned int order, int migratetype)
+{
+	struct free_area *area = &zone->free_area[order];
+
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
+}
+
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 					   unsigned int order)
 {
@@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-static int move_freepages(struct zone *zone,
-			  struct page *start_page, struct page *end_page,
-			  int migratetype, int *num_movable)
+static int move_freepages(struct zone *zone, struct page *start_page,
+			  struct page *end_page, int migratetype,
+			  bool to_tail, int *num_movable)
 {
 	struct page *page;
 	unsigned int order;
@@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
 
 		order = page_order(page);
-		move_to_free_list(page, zone, order, migratetype);
+		if (to_tail)
+			move_to_free_list_tail(page, zone, order, migratetype);
+		else
+			move_to_free_list(page, zone, order, migratetype);
 		page += 1 << order;
 		pages_moved += 1 << order;
 	}
@@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
 	return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable)
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
+			 bool to_tail, int *num_movable)
 {
 	unsigned long start_pfn, end_pfn;
 	struct page *start_page, *end_page;
@@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype,
-			      num_movable);
+			      to_tail, num_movable);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
@@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (!whole_block)
 		goto single_page;
 
-	free_pages = move_freepages_block(zone, page, start_type,
-					  &movable_pages);
+	free_pages = move_freepages_block(zone, page, start_type, false,
+					  &movable_pages);
 	/*
 	 * Determine how many pages are compatible with our allocation.
	 * For movable allocation, it's the number of movable pages which
@@ -2635,7 +2647,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	    && !is_migrate_cma(mt)) {
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
-		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
+		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, false,
+				     NULL);
 	}
 
 out_unlock:
@@ -2711,7 +2724,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype,
-						   NULL);
+						   false, NULL);
 			if (ret) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..de44e1329706 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -45,7 +45,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 	zone->nr_isolate_pageblock++;
 	nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
-					NULL);
+					false, NULL);
 
 	__mod_zone_freepage_state(zone, -nr_pages, mt);
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * Because freepage with more than pageblock_order on isolated
 	 * pageblock is restricted to merge due to freepage counting problem,
 	 * it is possible that there is free buddy page.
-	 * move_freepages_block() doesn't care of merge so we need other
+	 * move_freepages_block() doesn't care about merging, so we need another
 	 * approach in order to merge them. Isolation and free will make
 	 * these pages to be merged.
 	 */
@@ -106,9 +106,15 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
	 */
 	if (!isolated_page) {
-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
+		nr_pages = move_freepages_block(zone, page, migratetype, true,
+						NULL);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);
--
2.26.2
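
move_to_free_list_tail() above is a thin wrapper around list_move_tail(),
the tail-insertion sibling of list_move(). A self-contained
re-implementation of the two primitives (simplified from the kernel's
include/linux/list.h; the fake_page struct and main() are invented for
illustration) shows where a moved element ends up:

	#include <stdio.h>
	#include <stddef.h>

	struct list_head { struct list_head *prev, *next; };

	#define LIST_HEAD_INIT(name)	{ &(name), &(name) }

	static void __list_del(struct list_head *e)
	{
		e->prev->next = e->next;
		e->next->prev = e->prev;
	}

	static void __list_add(struct list_head *e, struct list_head *prev,
			       struct list_head *next)
	{
		next->prev = e;
		e->next = next;
		e->prev = prev;
		prev->next = e;
	}

	/* Re-insert at the head: first pick for the next allocation. */
	static void list_move(struct list_head *e, struct list_head *head)
	{
		__list_del(e);
		__list_add(e, head, head->next);
	}

	/* Re-insert at the tail: last pick, so the page stays untouched longest. */
	static void list_move_tail(struct list_head *e, struct list_head *head)
	{
		__list_del(e);
		__list_add(e, head->prev, head);
	}

	struct fake_page { int id; struct list_head lru; };

	int main(void)
	{
		struct list_head freelist = LIST_HEAD_INIT(freelist);
		struct fake_page pages[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
		struct list_head *pos;
		int i;

		for (i = 0; i < 3; i++) {
			/* self-init so the first "move" has valid links to unlink */
			pages[i].lru.prev = pages[i].lru.next = &pages[i].lru;
			list_move(&pages[i].lru, &freelist);	/* head: 2 1 0 */
		}
		list_move_tail(&pages[2].lru, &freelist);	/* now: 1 0 2 */

		for (pos = freelist.next; pos != &freelist; pos = pos->next) {
			struct fake_page *p = (struct fake_page *)
				((char *)pos - offsetof(struct fake_page, lru));
			printf("%d ", p->id);
		}
		printf("\n");
		return 0;
	}

Since the buddy allocator takes pages from the head of a freelist,
tail-moved pages (the just-un-isolated ones) are the last candidates for
reuse, which is exactly the optimization the commit message describes.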
From nobody Sun May 5 21:29:48 2024
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org, Andrew Morton, David Hildenbrand, Vlastimil Babka, Oscar Salvador, Alexander Duyck, Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport, "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu
Subject: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
Date: Mon, 28 Sep 2020 20:21:09 +0200
Message-Id: <20200928182110.7050-5-david@redhat.com>
In-Reply-To: <20200928182110.7050-1-david@redhat.com>
References: <20200928182110.7050-1-david@redhat.com>

__free_pages_core() is used when exposing fresh memory to the buddy
during system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:

1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
   balloon and virtio-mem), when parts of a section are kept
   fake-offline to be fake-onlined later on.

In case 1, we already place pages to the tail of the freelist. Pages will
be freed to MIGRATE_ISOLATE lists first and moved to the tail of the
freelists via undo_isolate_page_range().

In case 2, we currently don't implement a proper rule. In case of
virtio-mem, where we currently always online MAX_ORDER - 1 pages, the
pages will be placed to the HEAD of the freelist - undesirable. While the
hyper-v balloon calls generic_online_page() with single pages, usually it
will call it on successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelists and avoid
the PCP.

In __free_pages_core(), remove the now superfluous call to
set_page_refcounted() and add a comment regarding page initialization and
the refcount.

Note: In case 2 we currently don't shuffle. If this ever becomes relevant
(page shuffling is usually of limited use in virtualized environments), we
might want to shuffle after a sequence of generic_online_page() calls in
the relevant callers.

Reviewed-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Srinivasan" Cc: Haiyang Zhang Cc: Stephen Hemminger Cc: Wei Liu Signed-off-by: David Hildenbrand Acked-by: Michal Hocko Acked-by: Pankaj Gupta Reviewed-by: Wei Yang --- mm/page_alloc.c | 37 ++++++++++++++++++++++++------------- 1 file changed, 24 insertions(+), 13 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d5a5f528b8ca..8a2134fe9947 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -270,7 +270,8 @@ bool pm_suspended_storage(void) unsigned int pageblock_order __read_mostly; #endif =20 -static void __free_pages_ok(struct page *page, unsigned int order); +static void __free_pages_ok(struct page *page, unsigned int order, + fop_t fop_flags); =20 /* * results with 256, 32 in the lowmem_reserve sysctl: @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *rea= son) void free_compound_page(struct page *page) { mem_cgroup_uncharge(page); - __free_pages_ok(page, compound_order(page)); + __free_pages_ok(page, compound_order(page), FOP_NONE); } =20 void prep_compound_page(struct page *page, unsigned int order) @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, i= nt count, spin_unlock(&zone->lock); } =20 -static void free_one_page(struct zone *zone, - struct page *page, unsigned long pfn, - unsigned int order, - int migratetype) +static void free_one_page(struct zone *zone, struct page *page, unsigned l= ong pfn, + unsigned int order, int migratetype, fop_t fop_flags) { spin_lock(&zone->lock); if (unlikely(has_isolate_pageblock(zone) || is_migrate_isolate(migratetype))) { migratetype =3D get_pfnblock_migratetype(page, pfn); } - __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE); + __free_one_page(page, pfn, zone, order, migratetype, fop_flags); spin_unlock(&zone->lock); } =20 @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t sta= rt, phys_addr_t end) } } =20 -static void __free_pages_ok(struct page *page, unsigned int order) +static void __free_pages_ok(struct page *page, unsigned int order, + fop_t fop_flags) { unsigned long flags; int migratetype; @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsign= ed int order) migratetype =3D get_pfnblock_migratetype(page, pfn); local_irq_save(flags); __count_vm_events(PGFREE, 1 << order); - free_one_page(page_zone(page), page, pfn, order, migratetype); + free_one_page(page_zone(page), page, pfn, order, migratetype, + fop_flags); local_irq_restore(flags); } =20 @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned i= nt order) struct page *p =3D page; unsigned int loop; =20 + /* + * When initializing the memmap, init_single_page() sets the refcount + * of all pages to 1 ("allocated"/"not free"). We have to set the + * refcount of all involved pages to 0. + */ prefetchw(p); for (loop =3D 0; loop < (nr_pages - 1); loop++, p++) { prefetchw(p + 1); @@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned i= nt order) set_page_count(p, 0); =20 atomic_long_add(nr_pages, &page_zone(page)->managed_pages); - set_page_refcounted(page); - __free_pages(page, order); + + /* + * Bypass PCP and place fresh pages right to the tail, primarily + * relevant for memory onlining. 
+	 */
+	__free_pages_ok(page, order, FOP_TO_TAIL);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	 */
 	if (migratetype >= MIGRATE_PCPTYPES) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(zone, page, pfn, 0, migratetype);
+			free_one_page(zone, page, pfn, 0, migratetype,
+				      FOP_NONE);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 	if (order == 0)		/* Via pcp? */
 		free_unref_page(page);
 	else
-		__free_pages_ok(page, order);
+		__free_pages_ok(page, order, FOP_NONE);
 }
 
 void __free_pages(struct page *page, unsigned int order)
--
2.26.2
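
The last hunk above keeps ordinary order-0 frees on the PCP fast path
while __free_pages_core() now frees directly to the buddy, tail first. A
toy userspace model of that routing - the function names are borrowed from
the patch, everything else (the printf bodies, the chosen orders) is made
up for illustration:

	#include <stdio.h>

	typedef int fop_t;
	#define FOP_NONE	0
	#define FOP_TO_TAIL	(1 << 1)

	/* Stand-ins for the real freeing paths. */
	static void free_one_page(unsigned int order, fop_t fop_flags)
	{
		printf("buddy free: order %u, placed at the %s\n", order,
		       (fop_flags & FOP_TO_TAIL) ? "tail" : "head (or per heuristic)");
	}

	static void free_unref_page(void)
	{
		printf("pcp free: order 0, cached on the per-cpu list\n");
	}

	/* Ordinary frees: order-0 pages take the PCP fast path. */
	static void free_the_page(unsigned int order)
	{
		if (order == 0)
			free_unref_page();
		else
			free_one_page(order, FOP_NONE);
	}

	/* Fresh pages from boot / memory onlining: skip the PCP, go to tail. */
	static void free_pages_core(unsigned int order)
	{
		free_one_page(order, FOP_TO_TAIL);
	}

	int main(void)
	{
		free_the_page(0);	/* recently used page: keep it cache-hot */
		free_pages_core(10);	/* freshly onlined high-order block */
		return 0;
	}

Fresh pages have never been in any CPU's cache, so parking them on the PCP
buys nothing; sending them straight to the tail keeps them out of the
allocation path for as long as possible.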
From nobody Sun May 5 21:29:48 2024
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org, Andrew Morton, David Hildenbrand, Alexander Duyck, Mel Gorman, Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang, Oscar Salvador, Mike Rapoport
Subject: [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling
Date: Mon, 28 Sep 2020 20:21:10 +0200
Message-Id: <20200928182110.7050-6-david@redhat.com>
In-Reply-To: <20200928182110.7050-1-david@redhat.com>
References: <20200928182110.7050-1-david@redhat.com>

As we no longer shuffle via generic_online_page() and when undoing
isolation, we can simplify the comment. We now effectively shuffle only
once (properly) when onlining new memory.

Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Reviewed-by: Wei Yang
---
 mm/memory_hotplug.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9db80ee29caa..c589bd8801bb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -859,13 +859,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
 
 	/*
-	 * When exposing larger, physically contiguous memory areas to the
-	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
-	 * them either to the head or the tail of the freelist) is only helpful
-	 * for maintaining the shuffle, but not for creating the initial
-	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
-	 * are properly distributed across the whole freelist. Make sure to
-	 * shuffle once pageblocks are no longer isolated.
+	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
+	 * the tail of the freelist when undoing isolation). Shuffle the whole
+	 * zone to make sure the just onlined pages are properly distributed
+	 * across the whole freelist - to create an initial shuffle.
 	 */
 	shuffle_zone(zone);
 
--
2.26.2