From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:31 +0800
Subject: [PATCH 08/12] mm, swap: simplify swap table sanity range check
Message-Id: <20260126-swap-table-p3-v1-8-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song <kasong@tencent.com>

The newly introduced helper, which checks for bad slots and the emptiness of a cluster, covers the older sanity check just fine, with a more rigorous condition check, so merge the two.
Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swapfile.c | 35 +++++++++--------------------------
 1 file changed, 9 insertions(+), 26 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index bdce2abd9135..968153691fc4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -459,9 +459,11 @@ static void swap_table_free(struct swap_table *table)
  * One special case is that bad slots can't be freed, so check the number of
  * bad slots for swapoff, and non-swapoff path must never free bad slots.
  */
-static void swap_cluster_assert_empty(struct swap_cluster_info *ci, bool swapoff)
+static void swap_cluster_assert_empty(struct swap_cluster_info *ci,
+				      unsigned int ci_off, unsigned int nr,
+				      bool swapoff)
 {
-	unsigned int ci_off = 0, ci_end = SWAPFILE_CLUSTER;
+	unsigned int ci_end = ci_off + nr;
 	unsigned long swp_tb;
 	int bad_slots = 0;
 
@@ -588,7 +590,7 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
 
 static void __free_cluster(struct swap_info_struct *si, struct swap_cluster_info *ci)
 {
-	swap_cluster_assert_empty(ci, false);
+	swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, false);
 	swap_cluster_free_table(ci);
 	move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE);
 	ci->order = 0;
@@ -898,26 +900,6 @@ static bool cluster_scan_range(struct swap_info_struct *si,
 	return true;
 }
 
-/*
- * Currently, the swap table is not used for count tracking, just
- * do a sanity check here to ensure nothing leaked, so the swap
- * table should be empty upon freeing.
- */
-static void swap_cluster_assert_table_empty(struct swap_cluster_info *ci,
-					    unsigned int start, unsigned int nr)
-{
-	unsigned int ci_off = start % SWAPFILE_CLUSTER;
-	unsigned int ci_end = ci_off + nr;
-	unsigned long swp_tb;
-
-	if (IS_ENABLED(CONFIG_DEBUG_VM)) {
-		do {
-			swp_tb = __swap_table_get(ci, ci_off);
-			VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb));
-		} while (++ci_off < ci_end);
-	}
-}
-
 static bool cluster_alloc_range(struct swap_info_struct *si,
 				struct swap_cluster_info *ci,
 				struct folio *folio,
@@ -943,13 +925,14 @@ static bool cluster_alloc_range(struct swap_info_struct *si,
 	if (likely(folio)) {
 		order = folio_order(folio);
 		nr_pages = 1 << order;
+		swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false);
 		__swap_cache_add_folio(ci, folio, swp_entry(si->type, offset));
 	} else if (IS_ENABLED(CONFIG_HIBERNATION)) {
 		order = 0;
 		nr_pages = 1;
 		WARN_ON_ONCE(si->swap_map[offset]);
 		si->swap_map[offset] = 1;
-		swap_cluster_assert_table_empty(ci, offset, 1);
+		swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, 1, false);
 	} else {
 		/* Allocation without folio is only possible with hibernation */
 		WARN_ON_ONCE(1);
@@ -1768,7 +1751,7 @@ void swap_entries_free(struct swap_info_struct *si,
 
 	mem_cgroup_uncharge_swap(entry, nr_pages);
 	swap_range_free(si, offset, nr_pages);
-	swap_cluster_assert_table_empty(ci, offset, nr_pages);
+	swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false);
 
 	if (!ci->count)
 		free_cluster(si, ci);
@@ -2769,7 +2752,7 @@ static void free_swap_cluster_info(struct swap_cluster_info *cluster_info,
 	/* Cluster with bad marks count will have a remaining table */
 	spin_lock(&ci->lock);
 	if (rcu_dereference_protected(ci->table, true)) {
-		swap_cluster_assert_empty(ci, true);
+		swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, true);
 		swap_cluster_free_table(ci);
 	}
 	spin_unlock(&ci->lock);
-- 
2.52.0