From nobody Sun Dec 14 13:49:50 2025
From: Wupeng Ma
Subject: [PATCH v2 1/3] mm: memory-failure: update ttu flag inside unmap_poisoned_folio
Date: Thu, 16 Jan 2025 14:16:55 +0800
Message-ID: <20250116061657.227027-2-mawupeng1@huawei.com>
In-Reply-To: <20250116061657.227027-1-mawupeng1@huawei.com>
References: <20250116061657.227027-1-mawupeng1@huawei.com>

From: Ma Wupeng

Commit 6da6b1d4a7df ("mm/hwpoison: convert TTU_IGNORE_HWPOISON to
TTU_HWPOISON") introduced TTU_HWPOISON to replace TTU_IGNORE_HWPOISON, in
order to stop sending a SIGBUS signal when accessing an error page after a
memory error on a clean folio. However, during page migration an anon
folio must be unmapped with TTU_HWPOISON set in unmap_*(), while pagecache
folios need a policy like the one in hwpoison_user_mappings() to decide
whether to set this flag. So move this policy from hwpoison_user_mappings()
to unmap_poisoned_folio() to handle this warning properly.
The following warning is produced when unmapping a poisoned folio:

------------[ cut here ]------------
WARNING: CPU: 1 PID: 365 at mm/rmap.c:1847 try_to_unmap_one+0x8fc/0xd3c
Modules linked in:
CPU: 1 UID: 0 PID: 365 Comm: bash Tainted: G    W    6.13.0-rc1-00018-gacdb4bbda7ab #42
Tainted: [W]=WARN
Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : try_to_unmap_one+0x8fc/0xd3c
lr : try_to_unmap_one+0x3dc/0xd3c
Call trace:
 try_to_unmap_one+0x8fc/0xd3c (P)
 try_to_unmap_one+0x3dc/0xd3c (L)
 rmap_walk_anon+0xdc/0x1f8
 rmap_walk+0x3c/0x58
 try_to_unmap+0x88/0x90
 unmap_poisoned_folio+0x30/0xa8
 do_migrate_range+0x4a0/0x568
 offline_pages+0x5a4/0x670
 memory_block_action+0x17c/0x374
 memory_subsys_offline+0x3c/0x78
 device_offline+0xa4/0xd0
 state_store+0x8c/0xf0
 dev_attr_store+0x18/0x2c
 sysfs_kf_write+0x44/0x54
 kernfs_fop_write_iter+0x118/0x1a8
 vfs_write+0x3a8/0x4bc
 ksys_write+0x6c/0xf8
 __arm64_sys_write+0x1c/0x28
 invoke_syscall+0x44/0x100
 el0_svc_common.constprop.0+0x40/0xe0
 do_el0_svc+0x1c/0x28
 el0_svc+0x30/0xd0
 el0t_64_sync_handler+0xc8/0xcc
 el0t_64_sync+0x198/0x19c
---[ end trace 0000000000000000 ]---

Fixes: 6da6b1d4a7df ("mm/hwpoison: convert TTU_IGNORE_HWPOISON to TTU_HWPOISON")
Signed-off-by: Ma Wupeng
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
---
 mm/internal.h       |  5 ++--
 mm/memory-failure.c | 61 +++++++++++++++++++++++----------------------
 mm/memory_hotplug.c |  3 ++-
 3 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 9826f7dce607..3caee67c0abd 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1102,7 +1102,7 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
  * mm/memory-failure.c
  */
 #ifdef CONFIG_MEMORY_FAILURE
-void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu);
+int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill);
 void shake_folio(struct folio *folio);
 extern int hwpoison_filter(struct page *p);
 
@@ -1125,8 +1125,9 @@ unsigned long page_mapped_in_vma(const struct page *page,
 		struct vm_area_struct *vma);
 
 #else
-static inline void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+static inline int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
 {
+	return -EBUSY;
 }
 #endif
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a7b8ccd29b6f..b5212b6e330a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1556,8 +1556,34 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
 	return ret;
 }
 
-void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
 {
+	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
+	struct address_space *mapping;
+
+	if (folio_test_swapcache(folio)) {
+		pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
+		ttu &= ~TTU_HWPOISON;
+	}
+
+	/*
+	 * Propagate the dirty bit from PTEs to struct page first, because we
+	 * need this to decide if we should kill or just drop the page.
+	 * XXX: the dirty test could be racy: set_page_dirty() may not always
+	 * be called inside page lock (it's recommended but not enforced).
+	 */
+	mapping = folio_mapping(folio);
+	if (!must_kill && !folio_test_dirty(folio) && mapping &&
+	    mapping_can_writeback(mapping)) {
+		if (folio_mkclean(folio)) {
+			folio_set_dirty(folio);
+		} else {
+			ttu &= ~TTU_HWPOISON;
+			pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
+				pfn);
+		}
+	}
+
 	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
 		struct address_space *mapping;
 
@@ -1572,7 +1598,7 @@ void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
 		if (!mapping) {
 			pr_info("%#lx: could not lock mapping for mapped hugetlb folio\n",
 				folio_pfn(folio));
-			return;
+			return -EBUSY;
 		}
 
 		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
@@ -1580,6 +1606,8 @@ void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
 	} else {
 		try_to_unmap(folio, ttu);
 	}
+
+	return folio_mapped(folio) ? -EBUSY : 0;
 }
 
 /*
@@ -1589,8 +1617,6 @@ void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
 static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 		unsigned long pfn, int flags)
 {
-	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
-	struct address_space *mapping;
 	LIST_HEAD(tokill);
 	bool unmap_success;
 	int forcekill;
@@ -1613,29 +1639,6 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 	if (!folio_mapped(folio))
 		return true;
 
-	if (folio_test_swapcache(folio)) {
-		pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
-		ttu &= ~TTU_HWPOISON;
-	}
-
-	/*
-	 * Propagate the dirty bit from PTEs to struct page first, because we
-	 * need this to decide if we should kill or just drop the page.
-	 * XXX: the dirty test could be racy: set_page_dirty() may not always
-	 * be called inside page lock (it's recommended but not enforced).
-	 */
-	mapping = folio_mapping(folio);
-	if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping &&
-	    mapping_can_writeback(mapping)) {
-		if (folio_mkclean(folio)) {
-			folio_set_dirty(folio);
-		} else {
-			ttu &= ~TTU_HWPOISON;
-			pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
-				pfn);
-		}
-	}
-
 	/*
 	 * First collect all the processes that have the page
 	 * mapped in dirty form.  This has to be done before try_to_unmap,
@@ -1643,9 +1646,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 	 */
 	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
 
-	unmap_poisoned_folio(folio, ttu);
-
-	unmap_success = !folio_mapped(folio);
+	unmap_success = !unmap_poisoned_folio(folio, pfn, flags & MF_MUST_KILL);
 	if (!unmap_success)
 		pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n",
 		       pfn, folio_mapcount(folio));
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c43b4e7fb298..3de661e57e92 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1806,7 +1806,8 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			if (WARN_ON(folio_test_lru(folio)))
 				folio_isolate_lru(folio);
 			if (folio_mapped(folio))
-				unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK);
+				unmap_poisoned_folio(folio, pfn, false);
+
 			continue;
 		}
 
-- 
2.43.0
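
For readers following the series, the flag policy that this first patch centralizes in unmap_poisoned_folio() can be sketched outside the kernel tree. The C program below is only an illustrative userspace model: struct mock_folio, poison_ttu_flags() and the TTU_* constant values are made-up stand-ins rather than kernel definitions, and the folio_mkclean() dirty-propagation step is left out.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in flag values; the real ones live in include/linux/rmap.h. */
#define TTU_IGNORE_MLOCK 0x1u
#define TTU_SYNC         0x2u
#define TTU_HWPOISON     0x4u

/* Minimal stand-in for the folio state the policy looks at. */
struct mock_folio {
	bool in_swapcache;      /* folio_test_swapcache() */
	bool dirty;             /* folio_test_dirty() */
	bool writeback_mapping; /* has a mapping with mapping_can_writeback() */
};

/* Pick the TTU flags used to unmap a poisoned folio. */
static unsigned int poison_ttu_flags(const struct mock_folio *f, bool must_kill)
{
	unsigned int ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;

	/* Swap-cache pages are kept; unmap them without the hwpoison marker. */
	if (f->in_swapcache)
		ttu &= ~TTU_HWPOISON;

	/*
	 * A clean page-cache folio with a writeback-capable mapping can be
	 * re-read from disk, so it is dropped without killing anybody.
	 */
	if (!must_kill && !f->dirty && f->writeback_mapping)
		ttu &= ~TTU_HWPOISON;

	return ttu;
}

int main(void)
{
	struct mock_folio clean_pagecache = { false, false, true };
	struct mock_folio dirty_anon = { false, true, false };

	printf("clean pagecache: %#x\n", poison_ttu_flags(&clean_pagecache, false));
	printf("dirty anon:      %#x\n", poison_ttu_flags(&dirty_anon, false));
	return 0;
}

With must_kill (MF_MUST_KILL) or a dirty folio, TTU_HWPOISON stays set and the mappings are poisoned; the clean page-cache case corresponds to the "corrupted page was clean: dropped without side effects" message in the diff above.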
From nobody Sun Dec 14 13:49:50 2025
From: Wupeng Ma
Subject: [PATCH v2 2/3] hwpoison, memory_hotplug: lock folio before unmap hwpoisoned folio
Date: Thu, 16 Jan 2025 14:16:56 +0800
Message-ID: <20250116061657.227027-3-mawupeng1@huawei.com>
In-Reply-To: <20250116061657.227027-1-mawupeng1@huawei.com>
References: <20250116061657.227027-1-mawupeng1@huawei.com>

From: Ma Wupeng

Commit b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned pages to
be offlined") added page poison checks in do_migrate_range() in order to
make offlining hwpoisoned pages possible, by introducing isolate_lru_page()
and try_to_unmap() for hwpoisoned pages. However, the folio lock must be
held before calling try_to_unmap(). Take it to fix this problem.

The following warning is produced if the folio is not locked during unmap:

------------[ cut here ]------------
kernel BUG at ./include/linux/swapops.h:400!
Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
Modules linked in:
CPU: 4 UID: 0 PID: 411 Comm: bash Tainted: G    W    6.13.0-rc1-00016-g3c434c7ee82a-dirty #41
Tainted: [W]=WARN
Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : try_to_unmap_one+0xb08/0xd3c
lr : try_to_unmap_one+0x3dc/0xd3c
Call trace:
 try_to_unmap_one+0xb08/0xd3c (P)
 try_to_unmap_one+0x3dc/0xd3c (L)
 rmap_walk_anon+0xdc/0x1f8
 rmap_walk+0x3c/0x58
 try_to_unmap+0x88/0x90
 unmap_poisoned_folio+0x30/0xa8
 do_migrate_range+0x4a0/0x568
 offline_pages+0x5a4/0x670
 memory_block_action+0x17c/0x374
 memory_subsys_offline+0x3c/0x78
 device_offline+0xa4/0xd0
 state_store+0x8c/0xf0
 dev_attr_store+0x18/0x2c
 sysfs_kf_write+0x44/0x54
 kernfs_fop_write_iter+0x118/0x1a8
 vfs_write+0x3a8/0x4bc
 ksys_write+0x6c/0xf8
 __arm64_sys_write+0x1c/0x28
 invoke_syscall+0x44/0x100
 el0_svc_common.constprop.0+0x40/0xe0
 do_el0_svc+0x1c/0x28
 el0_svc+0x30/0xd0
 el0t_64_sync_handler+0xc8/0xcc
 el0t_64_sync+0x198/0x19c
Code: f9407be0 b5fff320 d4210000 17ffff97 (d4210000)
---[ end trace 0000000000000000 ]---

Fixes: b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned pages to be offlined")
Signed-off-by: Ma Wupeng
Acked-by: David Hildenbrand
---
 mm/memory_hotplug.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3de661e57e92..2815bd4ea483 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1805,8 +1805,11 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
 			if (WARN_ON(folio_test_lru(folio)))
 				folio_isolate_lru(folio);
-			if (folio_mapped(folio))
+			if (folio_mapped(folio)) {
+				folio_lock(folio);
 				unmap_poisoned_folio(folio, pfn, false);
+				folio_unlock(folio);
+			}
 
 			continue;
 		}
-- 
2.43.0
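
To make the locking rule this patch enforces concrete, here is a small self-contained C model; mock_try_to_unmap(), offline_poisoned_folio() and struct mock_folio are invented stand-ins, and the assert() plays the role of the swapops.h BUG shown in the log above, which fires when the folio is not locked across the rmap walk.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct mock_folio {
	bool locked;   /* folio_test_locked() */
	bool mapped;   /* folio_mapped() */
};

/* Models try_to_unmap(): the rmap walk requires the folio lock. */
static void mock_try_to_unmap(struct mock_folio *f)
{
	assert(f->locked && "folio must be locked before try_to_unmap()");
	f->mapped = false;
}

/* The hwpoison branch of the offline path, with the lock taken around the unmap. */
static void offline_poisoned_folio(struct mock_folio *f)
{
	if (f->mapped) {
		f->locked = true;       /* folio_lock(folio) */
		mock_try_to_unmap(f);   /* unmap_poisoned_folio(folio, pfn, false) */
		f->locked = false;      /* folio_unlock(folio) */
	}
}

int main(void)
{
	struct mock_folio f = { .locked = false, .mapped = true };

	offline_poisoned_folio(&f);
	printf("still mapped: %d\n", f.mapped);
	return 0;
}

Removing the folio_lock()/folio_unlock() pair from offline_poisoned_folio() trips the assertion, mirroring the BUG above.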
From nobody Sun Dec 14 13:49:50 2025
From: Wupeng Ma
Subject: [PATCH v2 3/3] mm: memory-hotplug: check folio ref count first in do_migrate_range
Date: Thu, 16 Jan 2025 14:16:57 +0800
Message-ID: <20250116061657.227027-4-mawupeng1@huawei.com>
In-Reply-To: <20250116061657.227027-1-mawupeng1@huawei.com>
References: <20250116061657.227027-1-mawupeng1@huawei.com>

From: Ma Wupeng

If a folio has an elevated reference count, folio_try_get() will acquire
it, the necessary operations are performed, and the reference is released
again. In the case of a poisoned folio without an elevated reference count
(which is unlikely for memory-failure), folio_try_get() will simply skip
it. Therefore, move folio_try_get(), which checks and acquires this
reference count, to the beginning of the loop.

Signed-off-by: Ma Wupeng
---
 mm/memory_hotplug.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2815bd4ea483..3fb75ee185c6 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1786,6 +1786,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		page = pfn_to_page(pfn);
 		folio = page_folio(page);
 
+		if (!folio_try_get(folio))
+			continue;
+
 		/*
 		 * No reference or lock is held on the folio, so it might
 		 * be modified concurrently (e.g. split). As such,
@@ -1795,12 +1798,6 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		if (folio_test_large(folio))
 			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
-		/*
-		 * HWPoison pages have elevated reference counts so the migration would
-		 * fail on them. It also doesn't make any sense to migrate them in the
-		 * first place. Still try to unmap such a page in case it is still mapped
-		 * (keep the unmap as the catch all safety net).
-		 */
 		if (folio_test_hwpoison(folio) ||
 		    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
 			if (WARN_ON(folio_test_lru(folio)))
@@ -1811,12 +1808,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 				folio_unlock(folio);
 			}
 
-			continue;
+			goto put_folio;
 		}
 
-		if (!folio_try_get(folio))
-			continue;
-
 		if (unlikely(page_folio(page) != folio))
 			goto put_folio;
 
-- 
2.43.0
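
The reordering in this last patch can likewise be modelled in plain C. The sketch below is illustrative only: mock_folio_try_get()/mock_folio_put() stand in for folio_try_get()/folio_put(), and the isolation and migration work on healthy folios is elided.

#include <stdbool.h>
#include <stdio.h>

struct mock_folio {
	int  refcount;
	bool hwpoisoned;
	bool mapped;
};

/* Models folio_try_get(): fails if the folio has already been freed. */
static bool mock_folio_try_get(struct mock_folio *f)
{
	if (f->refcount <= 0)
		return false;
	f->refcount++;
	return true;
}

static void mock_folio_put(struct mock_folio *f)
{
	f->refcount--;
}

/* One iteration of the migrate-range loop with the reordered check. */
static void handle_pfn(struct mock_folio *f)
{
	if (!mock_folio_try_get(f))	/* reference taken before any other check */
		return;

	if (f->hwpoisoned) {
		if (f->mapped)
			f->mapped = false;	/* folio_lock() + unmap_poisoned_folio() + folio_unlock() */
		goto put_folio;			/* drop the reference; a plain "continue" would leak it */
	}

	/* ...isolate and migrate healthy folios here... */

put_folio:
	mock_folio_put(f);
}

int main(void)
{
	struct mock_folio poisoned = { .refcount = 1, .hwpoisoned = true, .mapped = true };

	handle_pfn(&poisoned);
	printf("refcount=%d mapped=%d\n", poisoned.refcount, poisoned.mapped);
	return 0;
}

Because the reference is now taken before the hwpoison check, the hwpoison branch exits through put_folio instead of continue, and folios whose reference count has already dropped to zero are skipped up front.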