From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 kasan-dev@googlegroups.com, David Hildenbrand, Andrew Morton,
 Mike Rapoport, Oscar Salvador, "K. Y. Srinivasan", Haiyang Zhang,
 Wei Liu, Dexuan Cui, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo,
 Eugenio Pérez, Juergen Gross, Stefano Stabellini,
 Oleksandr Tyshchenko, Alexander Potapenko, Marco Elver, Dmitry Vyukov
Subject: [PATCH v1 3/3] mm/memory_hotplug: skip adjust_managed_page_count() for PageOffline() pages when offlining
Date: Fri, 7 Jun 2024 11:09:38 +0200
Message-ID: <20240607090939.89524-4-david@redhat.com>
In-Reply-To: <20240607090939.89524-1-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>

We currently have a hack for virtio-mem in place to handle memory
offlining with PageOffline pages for which we already adjusted the
managed page count.

Let's enlighten memory offlining code so we can get rid of that hack,
and document the situation.

Signed-off-by: David Hildenbrand
Acked-by: Oscar Salvador
---
 drivers/virtio/virtio_mem.c    | 11 ++---------
 include/linux/memory_hotplug.h |  4 ++--
 include/linux/page-flags.h     |  8 ++++++--
 mm/memory_hotplug.c            |  6 +++---
 mm/page_alloc.c                | 12 ++++++++++--
 5 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index b90df29621c81..b0b8714415783 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1269,12 +1269,6 @@ static void virtio_mem_fake_offline_going_offline(unsigned long pfn,
         struct page *page;
         unsigned long i;
 
-        /*
-         * Drop our reference to the pages so the memory can get offlined
-         * and add the unplugged pages to the managed page counters (so
-         * offlining code can correctly subtract them again).
-         */
-        adjust_managed_page_count(pfn_to_page(pfn), nr_pages);
         /* Drop our reference to the pages so the memory can get offlined. */
         for (i = 0; i < nr_pages; i++) {
                 page = pfn_to_page(pfn + i);
@@ -1293,10 +1287,9 @@ static void virtio_mem_fake_offline_cancel_offline(unsigned long pfn,
         unsigned long i;
 
         /*
-         * Get the reference we dropped when going offline and subtract the
-         * unplugged pages from the managed page counters.
+         * Get the reference again that we dropped via page_ref_dec_and_test()
+         * when going offline.
          */
-        adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
         for (i = 0; i < nr_pages; i++)
                 page_ref_inc(pfn_to_page(pfn + i));
 }
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7a9ff464608d7..ebe876930e782 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -175,8 +175,8 @@ extern int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 extern void mhp_deinit_memmap_on_memory(unsigned long pfn, unsigned long nr_pages);
 extern int online_pages(unsigned long pfn, unsigned long nr_pages,
                         struct zone *zone, struct memory_group *group);
-extern void __offline_isolated_pages(unsigned long start_pfn,
-                                     unsigned long end_pfn);
+extern unsigned long __offline_isolated_pages(unsigned long start_pfn,
+                                              unsigned long end_pfn);
 
 typedef void (*online_page_callback_t)(struct page *page, unsigned int order);
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e0362ce7fc109..0876aca0833e7 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1024,11 +1024,15 @@ PAGE_TYPE_OPS(Buddy, buddy, buddy)
  * putting them back to the buddy, it can do so via the memory notifier by
  * decrementing the reference count in MEM_GOING_OFFLINE and incrementing the
  * reference count in MEM_CANCEL_OFFLINE. When offlining, the PageOffline()
- * pages (now with a reference count of zero) are treated like free pages,
- * allowing the containing memory block to get offlined. A driver that
+ * pages (now with a reference count of zero) are treated like free (unmanaged)
+ * pages, allowing the containing memory block to get offlined. A driver that
  * relies on this feature is aware that re-onlining the memory block will
  * require not giving them to the buddy via generic_online_page().
  *
+ * Memory offlining code will not adjust the managed page count for any
+ * PageOffline() pages, treating them like they were never exposed to the
+ * buddy using generic_online_page().
+ *
  * There are drivers that mark a page PageOffline() and expect there won't be
  * any further access to page content. PFN walkers that read content of random
  * pages should check PageOffline() and synchronize with such drivers using
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0254059efcbe1..965707a02556f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1941,7 +1941,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
                         struct zone *zone, struct memory_group *group)
 {
         const unsigned long end_pfn = start_pfn + nr_pages;
-        unsigned long pfn, system_ram_pages = 0;
+        unsigned long pfn, managed_pages, system_ram_pages = 0;
         const int node = zone_to_nid(zone);
         unsigned long flags;
         struct memory_notify arg;
@@ -2062,7 +2062,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
         } while (ret);
 
         /* Mark all sections offline and remove free pages from the buddy. */
-        __offline_isolated_pages(start_pfn, end_pfn);
+        managed_pages = __offline_isolated_pages(start_pfn, end_pfn);
         pr_debug("Offlined Pages %ld\n", nr_pages);
 
         /*
@@ -2078,7 +2078,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
         zone_pcp_enable(zone);
 
         /* removal success */
-        adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
+        adjust_managed_page_count(pfn_to_page(start_pfn), -managed_pages);
         adjust_present_page_count(pfn_to_page(start_pfn), group, -nr_pages);
 
         /* reinitialise watermarks and update pcp limits */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 039bc52cc9091..809bc4a816e85 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6745,14 +6745,19 @@ void zone_pcp_reset(struct zone *zone)
 /*
  * All pages in the range must be in a single zone, must not contain holes,
  * must span full sections, and must be isolated before calling this function.
+ *
+ * Returns the number of managed (non-PageOffline()) pages in the range: the
+ * number of pages for which memory offlining code must adjust managed page
+ * counters using adjust_managed_page_count().
  */
-void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
+unsigned long __offline_isolated_pages(unsigned long start_pfn,
+                                       unsigned long end_pfn)
 {
+        unsigned long already_offline = 0, flags;
         unsigned long pfn = start_pfn;
         struct page *page;
         struct zone *zone;
         unsigned int order;
-        unsigned long flags;
 
         offline_mem_sections(pfn, end_pfn);
         zone = page_zone(pfn_to_page(pfn));
@@ -6774,6 +6779,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
                 if (PageOffline(page)) {
                         BUG_ON(page_count(page));
                         BUG_ON(PageBuddy(page));
+                        already_offline++;
                         pfn++;
                         continue;
                 }
@@ -6786,6 +6792,8 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
                 pfn += (1 << order);
         }
         spin_unlock_irqrestore(&zone->lock, flags);
+
+        return end_pfn - start_pfn - already_offline;
 }
 #endif
 
-- 
2.45.1
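
For context, the MEM_GOING_OFFLINE / MEM_CANCEL_OFFLINE handshake that the
updated page-flags.h comment describes looks roughly like the driver-side
sketch below. This is an illustrative example only, not code from this
patch: the mydrv_* names are hypothetical, and a real driver (such as
virtio-mem in drivers/virtio/virtio_mem.c) must additionally check that
the notified range actually belongs to it and handle partially plugged
ranges.

#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/notifier.h>
#include <linux/page_ref.h>

static void mydrv_going_offline(unsigned long start_pfn,
                                unsigned long nr_pages)
{
        unsigned long i;

        /*
         * Drop our reference so the PageOffline() pages reach a reference
         * count of zero and offlining code can treat them like free pages.
         */
        for (i = 0; i < nr_pages; i++) {
                struct page *page = pfn_to_page(start_pfn + i);

                if (WARN_ON(!page_ref_dec_and_test(page)))
                        dump_page(page, "unexpected extra page reference");
        }
}

static void mydrv_cancel_offline(unsigned long start_pfn,
                                 unsigned long nr_pages)
{
        unsigned long i;

        /* Offlining was aborted: take back the reference we dropped. */
        for (i = 0; i < nr_pages; i++)
                page_ref_inc(pfn_to_page(start_pfn + i));
}

static int mydrv_memory_notifier_cb(struct notifier_block *nb,
                                    unsigned long action, void *arg)
{
        const struct memory_notify *mhp = arg;

        /* A real driver first checks that this range is actually its own. */
        switch (action) {
        case MEM_GOING_OFFLINE:
                mydrv_going_offline(mhp->start_pfn, mhp->nr_pages);
                break;
        case MEM_CANCEL_OFFLINE:
                mydrv_cancel_offline(mhp->start_pfn, mhp->nr_pages);
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block mydrv_memory_nb = {
        .notifier_call = mydrv_memory_notifier_cb,
};

/* Registered once at probe time: register_memory_notifier(&mydrv_memory_nb); */

The key property is that once the driver's reference is dropped, the
PageOffline() pages sit at a reference count of zero, so offlining code can
treat them like free pages and, with this patch, skip them when adjusting
the managed page counters.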