From: Nhat Pham <nphamcs@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
	yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com,
	chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com,
	shikemeng@huaweicloud.com, viro@zeniv.linux.org.uk, baohua@kernel.org,
	bhe@redhat.com, osalvador@suse.de, lorenzo.stoakes@oracle.com,
	christophe.leroy@csgroup.eu, pavel@kernel.org, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-pm@vger.kernel.org, peterx@redhat.com, riel@surriel.com,
	joshua.hahnjy@gmail.com, npache@redhat.com, gourry@gourry.net,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	rafael@kernel.org, jannh@google.com, pfalcato@suse.de,
	zhengqi.arch@bytedance.com
Subject: [PATCH v3 03/20] mm: swap: add an abstract API for locking out swapoff
Date: Sun, 8 Feb 2026 13:58:16 -0800
Message-ID: <20260208215839.87595-4-nphamcs@gmail.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260208215839.87595-1-nphamcs@gmail.com>
References: <20260208215839.87595-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, we get a reference to the backing swap device in order to
prevent swapoff from freeing the metadata of a swap entry. This does not
make sense in the new virtual swap design, especially once the swap
backends are decoupled - a swap entry might not have any backing swap
device at all, and its backend might change at any time during its
lifetime.

In preparation for this, abstract the swapoff locking-out behavior away
behind a generic API.
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/swap.h | 17 +++++++++++++++++
 mm/memory.c          | 13 +++++++------
 mm/mincore.c         | 15 +++------------
 mm/shmem.c           | 12 ++++++------
 mm/swap_state.c      | 14 +++++++-------
 mm/userfaultfd.c     | 15 +++++++++------
 mm/zswap.c           |  5 ++---
 7 files changed, 51 insertions(+), 40 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index aa29d8ac542d1..3da637b218baf 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -659,5 +659,22 @@ static inline bool mem_cgroup_swap_full(struct folio *folio)
 }
 #endif
 
+static inline bool tryget_swap_entry(swp_entry_t entry,
+		struct swap_info_struct **sip)
+{
+	struct swap_info_struct *si = get_swap_device(entry);
+
+	if (sip)
+		*sip = si;
+
+	return si;
+}
+
+static inline void put_swap_entry(swp_entry_t entry,
+		struct swap_info_struct *si)
+{
+	put_swap_device(si);
+}
+
 #endif /* __KERNEL__*/
 #endif /* _LINUX_SWAP_H */
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a48..90031f833f52e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4630,6 +4630,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool need_clear_cache = false;
+	bool swapoff_locked = false;
 	bool exclusive = false;
 	softleaf_t entry;
 	pte_t pte;
@@ -4698,8 +4699,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 
 	/* Prevent swapoff from happening to us. */
-	si = get_swap_device(entry);
-	if (unlikely(!si))
+	swapoff_locked = tryget_swap_entry(entry, &si);
+	if (unlikely(!swapoff_locked))
 		goto out;
 
 	folio = swap_cache_get_folio(entry);
@@ -5047,8 +5048,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		if (waitqueue_active(&swapcache_wq))
 			wake_up(&swapcache_wq);
 	}
-	if (si)
-		put_swap_device(si);
+	if (swapoff_locked)
+		put_swap_entry(entry, si);
 	return ret;
 out_nomap:
 	if (vmf->pte)
@@ -5066,8 +5067,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		if (waitqueue_active(&swapcache_wq))
 			wake_up(&swapcache_wq);
 	}
-	if (si)
-		put_swap_device(si);
+	if (swapoff_locked)
+		put_swap_entry(entry, si);
 	return ret;
 }
 
diff --git a/mm/mincore.c b/mm/mincore.c
index e5d13eea92347..f3eb771249d67 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -77,19 +77,10 @@ static unsigned char mincore_swap(swp_entry_t entry, bool shmem)
 	if (!softleaf_is_swap(entry))
 		return !shmem;
 
-	/*
-	 * Shmem mapping lookup is lockless, so we need to grab the swap
-	 * device. mincore page table walk locks the PTL, and the swap
-	 * device is stable, avoid touching the si for better performance.
-	 */
-	if (shmem) {
-		si = get_swap_device(entry);
-		if (!si)
-			return 0;
-	}
+	if (!tryget_swap_entry(entry, &si))
+		return 0;
 	folio = swap_cache_get_folio(entry);
-	if (shmem)
-		put_swap_device(si);
+	put_swap_entry(entry, si);
 	/* The swap cache space contains either folio, shadow or NULL */
 	if (folio && !xa_is_value(folio)) {
 		present = folio_test_uptodate(folio);
diff --git a/mm/shmem.c b/mm/shmem.c
index 1db97ef2d14eb..b40be22fa5f09 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2307,7 +2307,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	softleaf_t index_entry;
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
-	bool skip_swapcache = false;
+	bool swapoff_locked, skip_swapcache = false;
 	int error, nr_pages, order;
 	pgoff_t offset;
 
@@ -2319,16 +2319,16 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (softleaf_is_poison_marker(index_entry))
 		return -EIO;
 
-	si = get_swap_device(index_entry);
+	swapoff_locked = tryget_swap_entry(index_entry, &si);
 	order = shmem_confirm_swap(mapping, index, index_entry);
-	if (unlikely(!si)) {
+	if (unlikely(!swapoff_locked)) {
 		if (order < 0)
 			return -EEXIST;
 		else
 			return -EINVAL;
 	}
 	if (unlikely(order < 0)) {
-		put_swap_device(si);
+		put_swap_entry(index_entry, si);
 		return -EEXIST;
 	}
 
@@ -2448,7 +2448,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	}
 	folio_mark_dirty(folio);
 	swap_free_nr(swap, nr_pages);
-	put_swap_device(si);
+	put_swap_entry(swap, si);
 
 	*foliop = folio;
 	return 0;
@@ -2466,7 +2466,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	swapcache_clear(si, folio->swap, folio_nr_pages(folio));
 	if (folio)
 		folio_put(folio);
-	put_swap_device(si);
+	put_swap_entry(swap, si);
 
 	return error;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 34c9d9b243a74..bece18eb540fa 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -538,8 +538,7 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	pgoff_t ilx;
 	struct folio *folio;
 
-	si = get_swap_device(entry);
-	if (!si)
+	if (!tryget_swap_entry(entry, &si))
 		return NULL;
 
 	mpol = get_vma_policy(vma, addr, 0, &ilx);
@@ -550,7 +549,7 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	if (page_allocated)
 		swap_read_folio(folio, plug);
 
-	put_swap_device(si);
+	put_swap_entry(entry, si);
 	return folio;
 }
 
@@ -763,6 +762,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	for (addr = start; addr < end; ilx++, addr += PAGE_SIZE) {
 		struct swap_info_struct *si = NULL;
 		softleaf_t entry;
+		bool swapoff_locked = false;
 
 		if (!pte++) {
 			pte = pte_offset_map(vmf->pmd, addr);
@@ -781,14 +781,14 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 		 * holding a reference to, try to grab a reference, or skip.
 		 */
 		if (swp_type(entry) != swp_type(targ_entry)) {
-			si = get_swap_device(entry);
-			if (!si)
+			swapoff_locked = tryget_swap_entry(entry, &si);
+			if (!swapoff_locked)
 				continue;
 		}
 		folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
 						&page_allocated, false);
-		if (si)
-			put_swap_device(si);
+		if (swapoff_locked)
+			put_swap_entry(entry, si);
 		if (!folio)
 			continue;
 		if (page_allocated) {
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e6dfd5f28acd7..25f89eba0438c 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1262,9 +1262,11 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
 	pte_t *dst_pte = NULL;
 	pmd_t dummy_pmdval;
 	pmd_t dst_pmdval;
+	softleaf_t entry;
 	struct folio *src_folio = NULL;
 	struct mmu_notifier_range range;
 	long ret = 0;
+	bool swapoff_locked = false;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
				src_addr, src_addr + len);
@@ -1429,7 +1431,7 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
				       len);
 	} else { /* !pte_present() */
 		struct folio *folio = NULL;
-		const softleaf_t entry = softleaf_from_pte(orig_src_pte);
+		entry = softleaf_from_pte(orig_src_pte);
 
 		if (softleaf_is_migration(entry)) {
 			pte_unmap(src_pte);
@@ -1449,8 +1451,8 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
 				goto out;
 			}
 
-			si = get_swap_device(entry);
-			if (unlikely(!si)) {
+			swapoff_locked = tryget_swap_entry(entry, &si);
+			if (unlikely(!swapoff_locked)) {
 				ret = -EAGAIN;
 				goto out;
 			}
@@ -1480,8 +1482,9 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
 				pte_unmap(src_pte);
 				pte_unmap(dst_pte);
 				src_pte = dst_pte = NULL;
-				put_swap_device(si);
+				put_swap_entry(entry, si);
 				si = NULL;
+				swapoff_locked = false;
 				/* now we can block and wait */
 				folio_lock(src_folio);
 				goto retry;
@@ -1507,8 +1510,8 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
 	if (dst_pte)
 		pte_unmap(dst_pte);
 	mmu_notifier_invalidate_range_end(&range);
-	if (si)
-		put_swap_device(si);
+	if (swapoff_locked)
+		put_swap_entry(entry, si);
 
 	return ret;
 }
diff --git a/mm/zswap.c b/mm/zswap.c
index ac9b7a60736bc..315e4d0d08311 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1009,14 +1009,13 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	int ret = 0;
 
 	/* try to allocate swap cache folio */
-	si = get_swap_device(swpentry);
-	if (!si)
+	if (!tryget_swap_entry(swpentry, &si))
 		return -EEXIST;
 
 	mpol = get_task_policy(current);
 	folio = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
				NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
-	put_swap_device(si);
+	put_swap_entry(swpentry, si);
 	if (!folio)
 		return -ENOMEM;
 
-- 
2.47.3
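[Editor's note - not part of the patch: the sketch below is a standalone, userspace-compilable mock of the calling convention the new helpers establish, for readers skimming the diff. Callers key their cleanup off the boolean returned by tryget_swap_entry() rather than off the si pointer, which is what lets a swap entry later have no backing device at all. All "mock_" names are hypothetical stand-ins, not kernel APIs.]

/*
 * Illustration only. Mirrors the shape of the converted call sites
 * (e.g. do_swap_page()): tryget -> use entry -> put, guarded by a bool.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct mock_swap_info { int users; };
typedef struct { unsigned long val; } mock_swp_entry_t;

static struct mock_swap_info swap_device = { .users = 0 };

/* Stand-in for get_swap_device(): may fail if swapoff already won. */
static struct mock_swap_info *mock_get_swap_device(mock_swp_entry_t entry)
{
	if (entry.val == 0)		/* pretend entry 0 has no device */
		return NULL;
	swap_device.users++;
	return &swap_device;
}

static void mock_put_swap_device(struct mock_swap_info *si)
{
	si->users--;
}

/* The abstract API: returns true iff swapoff is now locked out. */
static bool mock_tryget_swap_entry(mock_swp_entry_t entry,
				   struct mock_swap_info **sip)
{
	struct mock_swap_info *si = mock_get_swap_device(entry);

	if (sip)
		*sip = si;
	return si != NULL;
}

static void mock_put_swap_entry(mock_swp_entry_t entry,
				struct mock_swap_info *si)
{
	mock_put_swap_device(si);
}

int main(void)
{
	struct mock_swap_info *si = NULL;
	mock_swp_entry_t entry = { .val = 42 };
	bool swapoff_locked;

	swapoff_locked = mock_tryget_swap_entry(entry, &si);
	if (!swapoff_locked)
		return 1;	/* entry already gone; bail out */

	printf("swapoff locked out, users=%d\n", si->users);

	mock_put_swap_entry(entry, si);
	return 0;
}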