From: Kairui Song
Date: Tue, 25 Nov 2025 03:13:53 +0800
Subject: [PATCH v3 10/19] mm, swap: consolidate cluster reclaim and usability check
Message-Id: <20251125-swap-table-p2-v3-10-33f54f707a5c@tencent.com>
References: <20251125-swap-table-p2-v3-0-33f54f707a5c@tencent.com>
In-Reply-To: <20251125-swap-table-p2-v3-0-33f54f707a5c@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Baoquan He, Barry Song, Chris Li, Nhat Pham, Yosry Ahmed,
 David Hildenbrand, Johannes Weiner, Youngjun Park, Hugh Dickins, Baolin Wang,
 Ying Huang, Kemeng Shi, Lorenzo Stoakes, "Matthew Wilcox (Oracle)",
 linux-kernel@vger.kernel.org, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

Swap cluster cache reclaim requires releasing the lock, so the cluster may
become unusable after the reclaim. To prepare for checking the swap cache
using the swap table directly, consolidate the swap cluster reclaim and the
usability check logic.

With the swap table, we want to avoid touching the cluster's data entirely,
to avoid the RCU overhead here. Moving the usability check into the reclaim
helper also avoids a redundant scan of the slots when the cluster is no
longer usable, in which case the cluster should not be touched at all.

Also, adjust the scan slightly while at it: always scan the whole region
during reclaim, and don't skip slots covered by a reclaimed folio. Because
the reclaim is lockless, new cache can land at any time, and for allocation
we want all caches reclaimed to avoid fragmentation. Besides, if the scan
offset is not aligned with the size of the reclaimed folio, we might skip
some existing cache and fail the reclaim unexpectedly.

There should be no observable behavior change. It might slightly improve
fragmentation or performance.
Signed-off-by: Kairui Song
---
 mm/swapfile.c | 45 +++++++++++++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 16 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index cb59930b6415..bdbdb4a4c452 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -777,33 +777,51 @@ static int swap_cluster_setup_bad_slot(struct swap_cluster_info *cluster_info,
 	return 0;
 }
 
+/*
+ * Reclaim drops the ci lock, so the cluster may become unusable (freed or
+ * stolen by a lower order). @usable will be set to false if that happens.
+ */
 static bool cluster_reclaim_range(struct swap_info_struct *si,
 				  struct swap_cluster_info *ci,
-				  unsigned long start, unsigned long end)
+				  unsigned long start, unsigned int order,
+				  bool *usable)
 {
+	unsigned int nr_pages = 1 << order;
+	unsigned long offset = start, end = start + nr_pages;
 	unsigned char *map = si->swap_map;
-	unsigned long offset = start;
 	int nr_reclaim;
 
 	spin_unlock(&ci->lock);
 	do {
 		switch (READ_ONCE(map[offset])) {
 		case 0:
-			offset++;
 			break;
 		case SWAP_HAS_CACHE:
 			nr_reclaim = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
-			if (nr_reclaim > 0)
-				offset += nr_reclaim;
-			else
+			if (nr_reclaim < 0)
 				goto out;
 			break;
 		default:
 			goto out;
 		}
-	} while (offset < end);
+	} while (++offset < end);
 out:
 	spin_lock(&ci->lock);
+
+	/*
+	 * We just dropped ci->lock so cluster could be used by another
+	 * order or got freed, check if it's still usable or empty.
+	 */
+	if (!cluster_is_usable(ci, order)) {
+		*usable = false;
+		return false;
+	}
+	*usable = true;
+
+	/* Fast path, no need to scan if the whole cluster is empty */
+	if (cluster_is_empty(ci))
+		return true;
+
 	/*
 	 * Recheck the range no matter reclaim succeeded or not, the slot
 	 * could have been be freed while we are not holding the lock.
@@ -900,9 +918,10 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 	unsigned long start = ALIGN_DOWN(offset, SWAPFILE_CLUSTER);
 	unsigned long end = min(start + SWAPFILE_CLUSTER, si->max);
 	unsigned int nr_pages = 1 << order;
-	bool need_reclaim, ret;
+	bool need_reclaim, ret, usable;
 
 	lockdep_assert_held(&ci->lock);
+	VM_WARN_ON(!cluster_is_usable(ci, order));
 
 	if (end < nr_pages || ci->count + nr_pages > SWAPFILE_CLUSTER)
 		goto out;
@@ -912,14 +931,8 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 		if (!cluster_scan_range(si, ci, offset, nr_pages, &need_reclaim))
 			continue;
 		if (need_reclaim) {
-			ret = cluster_reclaim_range(si, ci, offset, offset + nr_pages);
-			/*
-			 * Reclaim drops ci->lock and cluster could be used
-			 * by another order. Not checking flag as off-list
-			 * cluster has no flag set, and change of list
-			 * won't cause fragmentation.
-			 */
-			if (!cluster_is_usable(ci, order))
+			ret = cluster_reclaim_range(si, ci, offset, order, &usable);
+			if (!usable)
 				goto out;
 			if (cluster_is_empty(ci))
 				offset = start;
-- 
2.52.0
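
[Editor's note, not part of the patch: the commit message's point about
unaligned scan offsets can be modeled with a small userspace toy. The sketch
below uses a made-up 8-slot map and a fake reclaim helper (toy_reclaim,
CACHED, NR_SLOTS are invented for illustration, not kernel symbols). It only
shows that advancing the scan by the size of a reclaimed folio can jump over
a still-cached slot when the scan starts in the middle of that folio, while
advancing one slot at a time, as the patch now does, visits every slot.]

/*
 * Toy model of the reclaim scan change.  Slots 2..5 are backed by one
 * cached folio, slot 6 by a separate one.  The scan starts at slot 3,
 * i.e. unaligned with the folio that gets reclaimed first.
 */
#include <stdio.h>
#include <string.h>

#define NR_SLOTS 8
#define CACHED   1	/* stands in for SWAP_HAS_CACHE */

/* fake reclaim: clears the whole toy folio containing @off */
static int toy_reclaim(unsigned char *map, int off)
{
	int start = (off >= 2 && off <= 5) ? 2 : (off == 6 ? 6 : -1);
	int size = (start == 2) ? 4 : 1;

	if (start < 0 || !map[off])
		return -1;
	memset(map + start, 0, size);
	return size;	/* old code advanced the scan by this much */
}

/* returns how many cached slots the scan left behind */
static int scan(int step_by_reclaim)
{
	unsigned char map[NR_SLOTS] = { 0, 0, CACHED, CACHED,
					CACHED, CACHED, CACHED, 0 };
	int off = 3, missed = 0;

	while (off < NR_SLOTS) {
		int nr = (map[off] == CACHED) ? toy_reclaim(map, off) : 0;
		off += (step_by_reclaim && nr > 0) ? nr : 1;
	}
	for (int i = 0; i < NR_SLOTS; i++)
		missed += (map[i] == CACHED);
	return missed;
}

int main(void)
{
	printf("advance by reclaimed size:  %d cached slot(s) skipped\n", scan(1));
	printf("advance one slot at a time: %d cached slot(s) skipped\n", scan(0));
	return 0;
}

Built with any C compiler, the first scan reports one cached slot skipped
(slot 6) and the second reports none, which is the difference the commit
message describes.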