From nobody Thu Apr 2 19:17:22 2026
From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	"Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	"Rafael J. Wysocki", Pavel Machek, Len Brown, Brendan Jackman,
	Johannes Weiner, Zi Yan, Oscar Salvador, Qi Zheng, Shakeel Butt
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-cxl@vger.kernel.org, kernel-team@meta.com, Dmitry Ilvokhin
Subject: [PATCH v4 3/5] mm: convert compaction to zone lock wrappers
Date: Fri, 27 Feb 2026 16:00:25 +0000
Message-ID: <3a09e46f52cf9f709b0725bc2b648cc5212843b2.1772206930.git.d@ilvokhin.com>
X-Mailer: git-send-email 2.53.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Compaction uses compact_lock_irqsave(), which currently operates on a
raw spinlock_t pointer so it can be used for both zone->lock and
lruvec->lru_lock. Since zone lock operations are now wrapped,
compact_lock_irqsave() can no longer directly operate on a spinlock_t
when the lock belongs to a zone.

Split the helper into compact_zone_lock_irqsave() and
compact_lruvec_lock_irqsave(), duplicating the small amount of shared
logic. As there are only two call sites and both statically know the
lock type, this avoids introducing additional abstraction or runtime
dispatch in the compaction path.

No functional change intended.
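For readers outside the kernel tree, the contended-trylock pattern that both
new helpers duplicate can be sketched in user space. This is a minimal
illustration under stated assumptions, not kernel code: pthread_mutex_t
stands in for the spinlock, IRQ flags are omitted, and compact_lock_sketch
is a hypothetical name.

```c
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for the kernel types used by the patch. */
enum migrate_mode { MIGRATE_ASYNC, MIGRATE_SYNC };

struct compact_control {
	enum migrate_mode mode;
	bool contended;		/* set once a lock was found contended */
};

/*
 * Shared pattern duplicated by compact_zone_lock_irqsave() and
 * compact_lruvec_lock_irqsave(): in async mode, try the lock first
 * and record contention, then fall back to a blocking acquire.
 * Always returns true, mirroring the kernel helpers.
 */
static bool compact_lock_sketch(pthread_mutex_t *lock,
				struct compact_control *cc)
{
	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
		if (pthread_mutex_trylock(lock) == 0)
			return true;	/* acquired without blocking */

		cc->contended = true;
	}

	pthread_mutex_lock(lock);	/* sync mode, or already contended */
	return true;
}
```

With only two call sites that each know their lock type statically,
duplicating these few lines is cheaper than keeping one generic helper
that would need runtime dispatch over the lock type.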
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Shakeel Butt
Acked-by: David Hildenbrand (Arm)
Acked-by: Zi Yan
Reviewed-by: SeongJae Park
Reviewed-by: Vlastimil Babka (SUSE)
---
 mm/compaction.c | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index fa0e332a8a92..c68fcc416fc7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -503,19 +503,36 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
  *
  * Always returns true which makes it easier to track lock state in callers.
  */
-static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
-				 struct compact_control *cc)
-	__acquires(lock)
+static bool compact_zone_lock_irqsave(struct zone *zone,
+				      unsigned long *flags,
+				      struct compact_control *cc)
+	__acquires(&zone->lock)
 {
 	/* Track if the lock is contended in async mode */
 	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
-		if (spin_trylock_irqsave(lock, *flags))
+		if (zone_trylock_irqsave(zone, *flags))
 			return true;
 
 		cc->contended = true;
 	}
 
-	spin_lock_irqsave(lock, *flags);
+	zone_lock_irqsave(zone, *flags);
+	return true;
+}
+
+static bool compact_lruvec_lock_irqsave(struct lruvec *lruvec,
+					unsigned long *flags,
+					struct compact_control *cc)
+	__acquires(&lruvec->lru_lock)
+{
+	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+			return true;
+
+		cc->contended = true;
+	}
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 	return true;
 }
 
@@ -531,7 +548,6 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * Returns true if compaction should abort due to fatal signal pending.
  * Returns false when compaction can continue.
  */
-
 static bool compact_unlock_should_abort(struct zone *zone,
					unsigned long flags,
					bool *locked,
@@ -616,8 +632,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 	/* If we already hold the lock, we can skip some rechecking. */
 	if (!locked) {
-		locked = compact_lock_irqsave(&cc->zone->lock,
-					      &flags, cc);
+		locked = compact_zone_lock_irqsave(cc->zone, &flags, cc);
 
 		/* Recheck this is a buddy page under lock */
 		if (!PageBuddy(page))
@@ -1163,7 +1178,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (locked)
 			unlock_page_lruvec_irqrestore(locked, flags);
 
-		compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+		compact_lruvec_lock_irqsave(lruvec, &flags, cc);
 		locked = lruvec;
 
 		lruvec_memcg_debug(lruvec, folio);
-- 
2.47.3