From: Nhat Pham
To: kasong@tencent.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, apopple@nvidia.com,
	axelrasmussen@google.com, baohua@kernel.org, baolin.wang@linux.alibaba.com,
	bhe@redhat.com, byungchul@sk.com, cgroups@vger.kernel.org,
	chengming.zhou@linux.dev, chrisl@kernel.org, corbet@lwn.net,
	david@kernel.org, dev.jain@arm.com, gourry@gourry.net, hannes@cmpxchg.org,
	hughd@google.com, jannh@google.com, joshua.hahnjy@gmail.com,
	lance.yang@linux.dev, lenb@kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org,
	lorenzo.stoakes@oracle.com, matthew.brost@intel.com, mhocko@suse.com,
	muchun.song@linux.dev, npache@redhat.com, nphamcs@gmail.com,
	pavel@kernel.org, peterx@redhat.com, peterz@infradead.org,
	pfalcato@suse.de, rafael@kernel.org, rakie.kim@sk.com,
	roman.gushchin@linux.dev, rppt@kernel.org, ryan.roberts@arm.com,
	shakeel.butt@linux.dev, shikemeng@huaweicloud.com, surenb@google.com,
	tglx@kernel.org, vbabka@suse.cz, weixugc@google.com,
	ying.huang@linux.alibaba.com, yosry.ahmed@linux.dev, yuanchu@google.com,
	zhengqi.arch@bytedance.com, ziy@nvidia.com, kernel-team@meta.com,
	riel@surriel.com
Subject: [PATCH v4 05/21] mm/swap: add a new function to check if a swap entry is in the swap cache
Date: Wed, 18 Mar 2026 15:29:36 -0700
Message-ID: <20260318222953.441758-6-nphamcs@gmail.com>
In-Reply-To: <20260318222953.441758-1-nphamcs@gmail.com>
References: <20260318222953.441758-1-nphamcs@gmail.com>

Userfaultfd checks whether a swap entry is in the swap cache. This is
currently done by reading the swapfile's swap map directly; however,
swap cache state will soon be managed at the virtual swap layer.
Abstract this check into a new helper function.

Signed-off-by: Nhat Pham
---
 include/linux/swap.h |  6 ++++++
 mm/swapfile.c        | 15 +++++++++++++++
 mm/userfaultfd.c     |  3 +--
 3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 3da637b218baf..f91a442ac0e82 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -435,6 +435,7 @@ void free_swap_and_cache_nr(swp_entry_t entry, int nr);
 int __swap_count(swp_entry_t entry);
 bool swap_entry_swapped(struct swap_info_struct *si, swp_entry_t entry);
 int swp_swapcount(swp_entry_t entry);
+bool is_swap_cached(swp_entry_t entry);
 
 /* Swap cache API (mm/swap_state.c) */
 static inline unsigned long total_swapcache_pages(void)
@@ -554,6 +555,11 @@ static inline int swp_swapcount(swp_entry_t entry)
 	return 0;
 }
 
+static inline bool is_swap_cached(swp_entry_t entry)
+{
+	return false;
+}
+
 static inline int folio_alloc_swap(struct folio *folio)
 {
 	return -EINVAL;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 46da28c533bbe..0471a965f222b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -194,6 +194,21 @@ static bool swap_only_has_cache(struct swap_info_struct *si,
 	return true;
 }
 
+/**
+ * is_swap_cached - check if the swap entry is cached
+ * @entry: swap entry to check
+ *
+ * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
+ *
+ * Returns true if the swap entry is cached, false otherwise.
+ */
+bool is_swap_cached(swp_entry_t entry)
+{
+	struct swap_info_struct *si = __swap_entry_to_info(entry);
+
+	return READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE;
+}
+
 static bool swap_is_last_map(struct swap_info_struct *si,
 		unsigned long offset, int nr_pages, bool *has_cache)
 {
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 25f89eba0438c..98be764fb3ecd 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1190,7 +1190,6 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
 	 * Check if the swap entry is cached after acquiring the src_pte
 	 * lock. Otherwise, we might miss a newly loaded swap cache folio.
 	 *
-	 * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
 	 * We are trying to catch newly added swap cache, the only possible case is
 	 * when a folio is swapped in and out again staying in swap cache, using the
 	 * same entry before the PTE check above. The PTL is acquired and released
@@ -1200,7 +1199,7 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
 	 * cache, or during the tiny synchronization window between swap cache and
 	 * swap_map, but it will be gone very quickly, worst result is retry jitters.
 	 */
-	if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
+	if (is_swap_cached(entry)) {
 		double_pt_unlock(dst_ptl, src_ptl);
 		return -EAGAIN;
 	}
-- 
2.52.0