From: Mel Gorman
To: Andrew Morton
Cc: Hugh Dickins, Yu Zhao, Vlastimil Babka, Marcelo Tosatti, Michal Hocko, Marek Szyprowski, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 1/2] mm/page_alloc: Leave IRQs enabled for per-cpu page allocations -fix
Date: Tue, 22 Nov 2022 13:12:28 +0000
Message-Id: <20221122131229.5263-2-mgorman@techsingularity.net>
In-Reply-To: <20221122131229.5263-1-mgorman@techsingularity.net>

As noted by Vlastimil Babka, the migratetype might be wrong if a PCP
was not locked, so check the migratetype early. Similarly, the !pcp
check is generally unlikely, so explicitly annotating it makes sense.

This is a fix for the mm-unstable patch
mm-page_alloc-leave-irqs-enabled-for-per-cpu-page-allocations.patch

Reported-by: Vlastimil Babka
Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 323fec05c4c6..445066617204 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3516,6 +3516,7 @@ void free_unref_page_list(struct list_head *list)
 		struct zone *zone = page_zone(page);
 
 		list_del(&page->lru);
+		migratetype = get_pcppage_migratetype(page);
 
 		/* Different zone, different pcp lock. */
 		if (zone != locked_zone) {
@@ -3530,7 +3531,7 @@ void free_unref_page_list(struct list_head *list)
 			 */
 			pcp_trylock_prepare(UP_flags);
 			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
-			if (!pcp) {
+			if (unlikely(!pcp)) {
 				pcp_trylock_finish(UP_flags);
 				free_one_page(zone, page, page_to_pfn(page),
 					      0, migratetype, FPI_NONE);
@@ -3545,7 +3546,6 @@ void free_unref_page_list(struct list_head *list)
 		 * Non-isolated types over MIGRATE_PCPTYPES get added
 		 * to the MIGRATE_MOVABLE pcp list.
 		 */
-		migratetype = get_pcppage_migratetype(page);
 		if (unlikely(migratetype >= MIGRATE_PCPTYPES))
 			migratetype = MIGRATE_MOVABLE;
 
-- 
2.35.3

From: Mel Gorman
To: Andrew Morton
Cc: Hugh Dickins, Yu Zhao, Vlastimil Babka, Marcelo Tosatti, Michal Hocko, Marek Szyprowski, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 2/2] mm/page_alloc: Simplify locking during free_unref_page_list
Date: Tue, 22 Nov 2022 13:12:29 +0000
Message-Id: <20221122131229.5263-3-mgorman@techsingularity.net>
In-Reply-To: <20221122131229.5263-1-mgorman@techsingularity.net>
While freeing a large list, the zone lock will be released and
reacquired to avoid long hold times since commit c24ad77d962c
("mm/page_alloc.c: avoid excessive IRQ disabled times in
free_unref_page_list()"). As suggested by Vlastimil Babka, the
lock release/reacquire logic can be simplified by reusing the
logic that acquires a different lock when changing zones.

Signed-off-by: Mel Gorman
Reviewed-by: Vlastimil Babka
---
 mm/page_alloc.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 445066617204..08e32daf0918 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3518,13 +3518,19 @@ void free_unref_page_list(struct list_head *list)
 		list_del(&page->lru);
 		migratetype = get_pcppage_migratetype(page);
 
-		/* Different zone, different pcp lock. */
-		if (zone != locked_zone) {
+		/*
+		 * Either different zone requiring a different pcp lock or
+		 * excessive lock hold times when freeing a large list of
+		 * pages.
+		 */
+		if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
 			if (pcp) {
 				pcp_spin_unlock(pcp);
 				pcp_trylock_finish(UP_flags);
 			}
 
+			batch_count = 0;
+
 			/*
 			 * trylock is necessary as pages may be getting freed
 			 * from IRQ or SoftIRQ context after an IO completion.
@@ -3539,7 +3545,6 @@ void free_unref_page_list(struct list_head *list)
 				continue;
 			}
 			locked_zone = zone;
-			batch_count = 0;
 		}
 
 		/*
@@ -3551,19 +3556,7 @@ void free_unref_page_list(struct list_head *list)
 
 		trace_mm_page_free_batched(page);
 		free_unref_page_commit(zone, pcp, page, migratetype, 0);
-
-		/*
-		 * Guard against excessive lock hold times when freeing
-		 * a large list of pages. Lock will be reacquired if
-		 * necessary on the next iteration.
-		 */
-		if (++batch_count == SWAP_CLUSTER_MAX) {
-			pcp_spin_unlock(pcp);
-			pcp_trylock_finish(UP_flags);
-			batch_count = 0;
-			pcp = NULL;
-			locked_zone = NULL;
-		}
+		batch_count++;
 	}
 
 	if (pcp) {
-- 
2.35.3