From: Kairui Song via B4 Relay
Date: Wed, 18 Feb 2026 04:06:33 +0800
Subject: [PATCH v3 08/12] mm, swap: simplify swap table sanity range check
Message-Id: <20260218-swap-table-p3-v3-8-f4e34be021a7@tencent.com>
References: <20260218-swap-table-p3-v3-0-f4e34be021a7@tencent.com>
In-Reply-To: <20260218-swap-table-p3-v3-0-f4e34be021a7@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
    Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
    linux-kernel@vger.kernel.org, Chris Li, Kairui Song
Reply-To: kasong@tencent.com
From: Kairui Song

The newly introduced helper, which checks for bad slots and cluster
emptiness, covers the older sanity check with a more rigorous condition,
so merge the two.
Signed-off-by: Kairui Song
Acked-by: Chris Li
---
 mm/swapfile.c | 35 +++++++++--------------------------
 1 file changed, 9 insertions(+), 26 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 18bacf16cd26..9057fb3e4eed 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -459,9 +459,11 @@ static void swap_table_free(struct swap_table *table)
  * One special case is that bad slots can't be freed, so check the number of
  * bad slots for swapoff, and non-swapoff path must never free bad slots.
  */
-static void swap_cluster_assert_empty(struct swap_cluster_info *ci, bool swapoff)
+static void swap_cluster_assert_empty(struct swap_cluster_info *ci,
+				      unsigned int ci_off, unsigned int nr,
+				      bool swapoff)
 {
-	unsigned int ci_off = 0, ci_end = SWAPFILE_CLUSTER;
+	unsigned int ci_end = ci_off + nr;
 	unsigned long swp_tb;
 	int bad_slots = 0;
 
@@ -588,7 +590,7 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
 
 static void __free_cluster(struct swap_info_struct *si, struct swap_cluster_info *ci)
 {
-	swap_cluster_assert_empty(ci, false);
+	swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, false);
 	swap_cluster_free_table(ci);
 	move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE);
 	ci->order = 0;
@@ -898,26 +900,6 @@ static bool cluster_scan_range(struct swap_info_struct *si,
 	return true;
 }
 
-/*
- * Currently, the swap table is not used for count tracking, just
- * do a sanity check here to ensure nothing leaked, so the swap
- * table should be empty upon freeing.
- */
-static void swap_cluster_assert_table_empty(struct swap_cluster_info *ci,
-					    unsigned int start, unsigned int nr)
-{
-	unsigned int ci_off = start % SWAPFILE_CLUSTER;
-	unsigned int ci_end = ci_off + nr;
-	unsigned long swp_tb;
-
-	if (IS_ENABLED(CONFIG_DEBUG_VM)) {
-		do {
-			swp_tb = __swap_table_get(ci, ci_off);
-			VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb));
-		} while (++ci_off < ci_end);
-	}
-}
-
 static bool cluster_alloc_range(struct swap_info_struct *si,
 				struct swap_cluster_info *ci,
 				struct folio *folio,
@@ -943,13 +925,14 @@ static bool cluster_alloc_range(struct swap_info_struct *si,
 	if (likely(folio)) {
 		order = folio_order(folio);
 		nr_pages = 1 << order;
+		swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false);
 		__swap_cache_add_folio(ci, folio, swp_entry(si->type, offset));
 	} else if (IS_ENABLED(CONFIG_HIBERNATION)) {
 		order = 0;
 		nr_pages = 1;
 		WARN_ON_ONCE(si->swap_map[offset]);
 		si->swap_map[offset] = 1;
-		swap_cluster_assert_table_empty(ci, offset, 1);
+		swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, 1, false);
 	} else {
 		/* Allocation without folio is only possible with hibernation */
 		WARN_ON_ONCE(1);
@@ -1768,7 +1751,7 @@ void swap_entries_free(struct swap_info_struct *si,
 
 	mem_cgroup_uncharge_swap(entry, nr_pages);
 	swap_range_free(si, offset, nr_pages);
-	swap_cluster_assert_table_empty(ci, offset, nr_pages);
+	swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false);
 
 	if (!ci->count)
 		free_cluster(si, ci);
@@ -2780,7 +2763,7 @@ static void free_swap_cluster_info(struct swap_cluster_info *cluster_info,
 	/* Cluster with bad marks count will have a remaining table */
 	spin_lock(&ci->lock);
 	if (rcu_dereference_protected(ci->table, true)) {
-		swap_cluster_assert_empty(ci, true);
+		swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, true);
 		swap_cluster_free_table(ci);
 	}
 	spin_unlock(&ci->lock);
-- 
2.52.0