From nobody Fri Apr 3 01:29:49 2026
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Kemeng Shi, Nhat Pham, Baoquan He,
 Barry Song, "Rafael J. Wysocki", Carsten Grohmann,
 linux-kernel@vger.kernel.org, "open list:SUSPEND TO RAM", Kairui Song
Subject: [PATCH 1/2] mm, swap: simplify checking if a folio is swapped
Date: Sun, 15 Feb 2026 18:38:14 +0800
Message-ID: <20260215103815.87329-1-ryncsn@gmail.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260215-hibernate-perf-v1-0-f55ee9ee67db@tencent.com>
References: <20260215-hibernate-perf-v1-0-f55ee9ee67db@tencent.com>

From: Kairui Song

Clean up and simplify how we check whether a folio is swapped. The
helper already requires the folio to be locked and in the swap cache,
which is enough to pin the swap cluster and keep it from being freed,
so there is no need to take any other lock to avoid use-after-free.

Besides, swap operations have been cleaned up and defined to be mostly
folio based, and the only place a folio can have any of its swap slots'
count increased from 0 to 1 is folio_dup_swap(), which also requires the
folio lock. So while we hold the folio lock here, a folio cannot change
its swap status from not swapped (all swap slots have a count of 0) to
swapped (any slot has a swap count larger than 0).
So there will be no false negatives from this helper if we simply depend
on the folio lock to stabilize the cluster.

We only use this helper to determine whether we can and should release
the swap cache, so false positives are completely harmless, and they
already existed before: depending on the timing, a racing thread could
release the swap count right after this helper dropped the ci lock and
before it returned. In either case, the worst that can happen is that we
leave behind a clean swap cache folio, which will still be reclaimed
just fine under memory pressure.

In conclusion, we can make the check much simpler and lockless. Also,
rename it to folio_maybe_swapped() to reflect the design.

Signed-off-by: Kairui Song
Tested-by: Carsten Grohmann
---
 mm/swap.h     |  5 ++--
 mm/swapfile.c | 82 ++++++++++++++++++++++++++++-----------------------
 2 files changed, 48 insertions(+), 39 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 9fc5fecdcfdf..3ee761ee8348 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -195,12 +195,13 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
  *
  * folio_alloc_swap(): the entry point for a folio to be swapped
  * out. It allocates swap slots and pins the slots with swap cache.
- * The slots start with a swap count of zero.
+ * The slots start with a swap count of zero. The slots are pinned
+ * by swap cache reference which doesn't contribute to swap count.
  *
  * folio_dup_swap(): increases the swap count of a folio, usually
  * during it gets unmapped and a swap entry is installed to replace
  * it (e.g., swap entry in page table). A swap slot with swap
- * count == 0 should only be increasd by this helper.
+ * count == 0 can only be increased by this helper.
  *
  * folio_put_swap(): does the opposite thing of folio_dup_swap().
  */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9628015fd8cf..cb18960a6089 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1743,7 +1743,11 @@ int folio_alloc_swap(struct folio *folio)
  * @subpage: if not NULL, only increase the swap count of this subpage.
  *
  * Typically called when the folio is unmapped and have its swap entry to
- * take its palce.
+ * take its place: Swap entries allocated to a folio has count == 0 and pinned
+ * by swap cache. The swap cache pin doesn't increase the swap count. This
+ * helper sets the initial count == 1 and increases the count as the folio is
+ * unmapped and swap entries referencing the slots are generated to replace
+ * the folio.
  *
  * Context: Caller must ensure the folio is locked and in the swap cache.
  * NOTE: The caller also has to ensure there is no raced call to
@@ -1944,49 +1948,44 @@ int swp_swapcount(swp_entry_t entry)
 	return count < 0 ? 0 : count;
 }
 
-static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
-		swp_entry_t entry, int order)
+/*
+ * folio_maybe_swapped - Test if a folio covers any swap slot with count > 0.
+ *
+ * Check if a folio is swapped. Holding the folio lock ensures the folio won't
+ * go from not-swapped to swapped because the initial swap count increment can
+ * only be done by folio_dup_swap, which also locks the folio. But a concurrent
+ * decrease of swap count is possible through swap_put_entries_direct, so this
+ * may return a false positive.
+ *
+ * Context: Caller must ensure the folio is locked and in the swap cache.
+ */
+static bool folio_maybe_swapped(struct folio *folio)
 {
+	swp_entry_t entry = folio->swap;
 	struct swap_cluster_info *ci;
-	unsigned int nr_pages = 1 << order;
-	unsigned long roffset = swp_offset(entry);
-	unsigned long offset = round_down(roffset, nr_pages);
-	unsigned int ci_off;
-	int i;
+	unsigned int ci_off, ci_end;
 	bool ret = false;
 
-	ci = swap_cluster_lock(si, offset);
-	if (nr_pages == 1) {
-		ci_off = roffset % SWAPFILE_CLUSTER;
-		if (swp_tb_get_count(__swap_table_get(ci, ci_off)))
-			ret = true;
-		goto unlock_out;
-	}
-	for (i = 0; i < nr_pages; i++) {
-		ci_off = (offset + i) % SWAPFILE_CLUSTER;
-		if (swp_tb_get_count(__swap_table_get(ci, ci_off))) {
-			ret = true;
-			break;
-		}
-	}
-unlock_out:
-	swap_cluster_unlock(ci);
-	return ret;
-}
-
-static bool folio_swapped(struct folio *folio)
-{
-	swp_entry_t entry = folio->swap;
-	struct swap_info_struct *si;
-
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio), folio);
 
-	si = __swap_entry_to_info(entry);
-	if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio)))
-		return swap_entry_swapped(si, entry);
+	ci = __swap_entry_to_cluster(entry);
+	ci_off = swp_cluster_offset(entry);
+	ci_end = ci_off + folio_nr_pages(folio);
+	/*
+	 * Extra locking not needed, folio lock ensures its swap entries
+	 * won't be released, the backing data won't be gone either.
+	 */
+	rcu_read_lock();
+	do {
+		if (__swp_tb_get_count(__swap_table_get(ci, ci_off))) {
+			ret = true;
+			break;
+		}
+	} while (++ci_off < ci_end);
+	rcu_read_unlock();
 
-	return swap_page_trans_huge_swapped(si, entry, folio_order(folio));
+	return ret;
 }
 
 static bool folio_swapcache_freeable(struct folio *folio)
@@ -2032,7 +2031,7 @@ bool folio_free_swap(struct folio *folio)
 {
 	if (!folio_swapcache_freeable(folio))
 		return false;
-	if (folio_swapped(folio))
+	if (folio_maybe_swapped(folio))
 		return false;
 
 	swap_cache_del_folio(folio);
@@ -3710,6 +3709,8 @@ void si_swapinfo(struct sysinfo *val)
  *
  * Context: Caller must ensure there is no race condition on the reference
  * owner. e.g., locking the PTL of a PTE containing the entry being increased.
+ * Also the swap entry must have a count >= 1. Otherwise folio_dup_swap should
+ * be used.
  */
 int swap_dup_entry_direct(swp_entry_t entry)
 {
@@ -3721,6 +3722,13 @@ int swap_dup_entry_direct(swp_entry_t entry)
 		return -EINVAL;
 	}
 
+	/*
+	 * The caller must be increasing the swap count from a direct
+	 * reference of the swap slot (e.g. a swap entry in page table).
+	 * So the swap count must be >= 1.
+	 */
+	VM_WARN_ON_ONCE(!swap_entry_swapped(si, entry));
+
 	return swap_dup_entries_cluster(si, swp_offset(entry), 1);
 }
 
-- 
2.52.0

From nobody Fri Apr 3 01:29:49 2026
From: Kairui Song
Date: Sun, 15 Feb 2026 18:25:10 +0800
Subject: [PATCH 2/2] mm, swap: merge common convention and simplify
 allocation helper
Message-Id: <20260215-hibernate-perf-v1-2-f55ee9ee67db@tencent.com>
References: <20260215-hibernate-perf-v1-0-f55ee9ee67db@tencent.com>
In-Reply-To: <20260215-hibernate-perf-v1-0-f55ee9ee67db@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Kemeng Shi, Nhat Pham, Baoquan He,
 Barry Song, "Rafael J. Wysocki", Carsten Grohmann,
 linux-kernel@vger.kernel.org, "open list:SUSPEND TO RAM", Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

Almost all callers of the cluster scan helper require the same
lock -> check usefulness/emptiness -> unlock routine, so merge it into
the helper itself to simplify the code.
This should also improve the scan slightly, as a few callers didn't
check for emptiness, which might help reduce fragmentation in rare
cases.

Signed-off-by: Kairui Song
Tested-by: Carsten Grohmann
---
 mm/swapfile.c | 30 ++++++++----------------------
 1 file changed, 8 insertions(+), 22 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index bcac10d96fb5..03cc0ff4dc8c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -923,11 +923,14 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 	bool need_reclaim, ret, usable;
 
 	lockdep_assert_held(&ci->lock);
-	VM_WARN_ON(!cluster_is_usable(ci, order));
 
-	if (end < nr_pages || ci->count + nr_pages > SWAPFILE_CLUSTER)
+	if (!cluster_is_usable(ci, order) || end < nr_pages ||
+	    ci->count + nr_pages > SWAPFILE_CLUSTER)
 		goto out;
 
+	if (cluster_is_empty(ci))
+		offset = cluster_offset(si, ci);
+
 	for (end -= nr_pages; offset <= end; offset += nr_pages) {
 		need_reclaim = false;
 		if (!cluster_scan_range(si, ci, offset, nr_pages, &need_reclaim))
@@ -1060,14 +1063,7 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si,
 			goto new_cluster;
 
 		ci = swap_cluster_lock(si, offset);
-		/* Cluster could have been used by another order */
-		if (cluster_is_usable(ci, order)) {
-			if (cluster_is_empty(ci))
-				offset = cluster_offset(si, ci);
-			found = alloc_swap_scan_cluster(si, ci, folio, offset);
-		} else {
-			swap_cluster_unlock(ci);
-		}
+		found = alloc_swap_scan_cluster(si, ci, folio, offset);
 		if (found)
 			goto done;
 	}
@@ -1332,14 +1328,7 @@ static bool swap_alloc_fast(struct folio *folio)
 		return false;
 
 	ci = swap_cluster_lock(si, offset);
-	if (cluster_is_usable(ci, order)) {
-		if (cluster_is_empty(ci))
-			offset = cluster_offset(si, ci);
-		alloc_swap_scan_cluster(si, ci, folio, offset);
-	} else {
-		swap_cluster_unlock(ci);
-	}
-
+	alloc_swap_scan_cluster(si, ci, folio, offset);
 	put_swap_device(si);
 	return folio_test_swapcache(folio);
 }
@@ -1945,10 +1934,7 @@ swp_entry_t swap_alloc_hibernation_slot(int type)
 	pcp_offset = this_cpu_read(percpu_swap_cluster.offset[0]);
 	if (pcp_si == si && pcp_offset) {
 		ci = swap_cluster_lock(si, pcp_offset);
-		if (cluster_is_usable(ci, 0))
-			offset = alloc_swap_scan_cluster(si, ci, NULL, pcp_offset);
-		else
-			swap_cluster_unlock(ci);
+		offset = alloc_swap_scan_cluster(si, ci, NULL, pcp_offset);
 	}
 	if (!offset)
 		offset = cluster_alloc_swap_entry(si, NULL);
-- 
2.52.0